00:00:00.001 Started by upstream project "autotest-per-patch" build number 131149
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.101 The recommended git tool is: git
00:00:00.101 using credential 00000000-0000-0000-0000-000000000002
00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.191 Fetching changes from the remote Git repository
00:00:00.193 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.276 Using shallow fetch with depth 1
00:00:00.276 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.276 > git --version # timeout=10
00:00:00.349 > git --version # 'git version 2.39.2'
00:00:00.349 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.397 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.397 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.939 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.953 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.969 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD)
00:00:06.969 > git config core.sparsecheckout # timeout=10
00:00:06.981 > git read-tree -mu HEAD # timeout=10
00:00:06.999 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5
00:00:07.019 Commit message: "scripts/kid: add issue 3551"
00:00:07.019 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10
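The checkout above is the usual single-revision pattern: a depth-1 fetch of one ref, then a detached checkout of FETCH_HEAD. A minimal hand-run equivalent of what the git plugin executed (URL and ref taken from the log):

    git fetch --tags --force --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD    # lands on bb1b9bfed281c179b06b3c39bbc702302ccac514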
00:00:07.113 [Pipeline] Start of Pipeline
00:00:07.128 [Pipeline] library
00:00:07.130 Loading library shm_lib@master
00:00:07.130 Library shm_lib@master is cached. Copying from home.
00:00:07.148 [Pipeline] node
00:00:22.150 Still waiting to schedule task
00:00:22.150 Waiting for next available executor on ‘vagrant-vm-host’
00:20:53.241 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest_2
00:20:53.242 [Pipeline] {
00:20:53.254 [Pipeline] catchError
00:20:53.256 [Pipeline] {
00:20:53.273 [Pipeline] wrap
00:20:53.279 [Pipeline] {
00:20:53.288 [Pipeline] stage
00:20:53.290 [Pipeline] { (Prologue)
00:20:53.310 [Pipeline] echo
00:20:53.311 Node: VM-host-SM0
00:20:53.318 [Pipeline] cleanWs
00:20:53.327 [WS-CLEANUP] Deleting project workspace...
00:20:53.327 [WS-CLEANUP] Deferred wipeout is used...
00:20:53.332 [WS-CLEANUP] done
00:20:53.522 [Pipeline] setCustomBuildProperty
00:20:53.604 [Pipeline] httpRequest
00:20:54.065 [Pipeline] echo
00:20:54.067 Sorcerer 10.211.164.101 is alive
00:20:54.077 [Pipeline] retry
00:20:54.079 [Pipeline] {
00:20:54.093 [Pipeline] httpRequest
00:20:54.097 HttpMethod: GET
00:20:54.098 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:20:54.099 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:20:54.122 Response Code: HTTP/1.1 200 OK
00:20:54.123 Success: Status code 200 is in the accepted range: 200,404
00:20:54.124 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:21:12.266 [Pipeline] }
00:21:12.285 [Pipeline] // retry
00:21:12.293 [Pipeline] sh
00:21:12.572 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:21:12.587 [Pipeline] httpRequest
00:21:12.997 [Pipeline] echo
00:21:12.999 Sorcerer 10.211.164.101 is alive
00:21:13.008 [Pipeline] retry
00:21:13.010 [Pipeline] {
00:21:13.024 [Pipeline] httpRequest
00:21:13.029 HttpMethod: GET
00:21:13.029 URL: http://10.211.164.101/packages/spdk_d056e75889972bc9be26c7fee90240bc16303b37.tar.gz
00:21:13.030 Sending request to url: http://10.211.164.101/packages/spdk_d056e75889972bc9be26c7fee90240bc16303b37.tar.gz
00:21:13.040 Response Code: HTTP/1.1 200 OK
00:21:13.040 Success: Status code 200 is in the accepted range: 200,404
00:21:13.041 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_d056e75889972bc9be26c7fee90240bc16303b37.tar.gz
00:21:37.993 [Pipeline] }
00:21:38.010 [Pipeline] // retry
00:21:38.017 [Pipeline] sh
00:21:38.295 + tar --no-same-owner -xf spdk_d056e75889972bc9be26c7fee90240bc16303b37.tar.gz
00:21:41.589 [Pipeline] sh
00:21:41.870 + git -C spdk log --oneline -n5
00:21:41.870 d056e7588 nvme: Use spdk_nvme_trtype_is_fabrics() in CTRLR_STRING()
00:21:41.870 a73e7d07f nvme: Move NVME_CTRLR_*LOG() to nvme_internal.h
00:21:41.870 b0d1eb075 bdev/nvme: Add NVME_BDEV_*LOG() to identify nvme_bdev
00:21:41.870 7242c5e21 bdev/nvme: Add more logs for spdk_bdev_reset
00:21:41.870 9faa6af35 bdev/nvme: Change NVME_CTRLR_DEBUGLOG() to _INFOLOG() for ctrlr reset
00:21:41.889 [Pipeline] writeFile
00:21:41.904 [Pipeline] sh
00:21:42.242 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:21:42.256 [Pipeline] sh
00:21:42.541 + cat autorun-spdk.conf
00:21:42.541 SPDK_RUN_FUNCTIONAL_TEST=1
00:21:42.541 SPDK_TEST_NVME=1
00:21:42.541 SPDK_TEST_FTL=1
00:21:42.541 SPDK_TEST_ISAL=1
00:21:42.541 SPDK_RUN_ASAN=1
00:21:42.541 SPDK_RUN_UBSAN=1
00:21:42.541 SPDK_TEST_XNVME=1
00:21:42.541 SPDK_TEST_NVME_FDP=1
00:21:42.541 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:42.547 RUN_NIGHTLY=0
00:21:42.549 [Pipeline] }
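autorun-spdk.conf, printed above, is a plain shell fragment: each SPDK_TEST_*/SPDK_RUN_* flag is sourced by the autotest scripts and gates one test suite. A simplified sketch of that consumption pattern (illustrative only, not the actual autorun.sh logic):

    #!/usr/bin/env bash
    # Pull in the job's flags; unset flags behave as disabled.
    source ./autorun-spdk.conf
    (( SPDK_TEST_NVME == 1 ))     && echo "NVMe functional tests enabled"
    (( SPDK_TEST_NVME_FDP == 1 )) && echo "NVMe FDP tests enabled"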
00:21:42.562 [Pipeline] // stage
00:21:42.576 [Pipeline] stage
00:21:42.578 [Pipeline] { (Run VM)
00:21:42.590 [Pipeline] sh
00:21:42.870 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:21:42.870 + echo 'Start stage prepare_nvme.sh'
00:21:42.870 Start stage prepare_nvme.sh
00:21:42.870 + [[ -n 1 ]]
00:21:42.870 + disk_prefix=ex1
00:21:42.870 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:21:42.870 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:21:42.870 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:21:42.870 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:21:42.870 ++ SPDK_TEST_NVME=1
00:21:42.870 ++ SPDK_TEST_FTL=1
00:21:42.870 ++ SPDK_TEST_ISAL=1
00:21:42.870 ++ SPDK_RUN_ASAN=1
00:21:42.870 ++ SPDK_RUN_UBSAN=1
00:21:42.870 ++ SPDK_TEST_XNVME=1
00:21:42.870 ++ SPDK_TEST_NVME_FDP=1
00:21:42.870 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:42.870 ++ RUN_NIGHTLY=0
00:21:42.870 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:21:42.870 + nvme_files=()
00:21:42.870 + declare -A nvme_files
00:21:42.870 + backend_dir=/var/lib/libvirt/images/backends
00:21:42.870 + nvme_files['nvme.img']=5G
00:21:42.870 + nvme_files['nvme-cmb.img']=5G
00:21:42.870 + nvme_files['nvme-multi0.img']=4G
00:21:42.870 + nvme_files['nvme-multi1.img']=4G
00:21:42.870 + nvme_files['nvme-multi2.img']=4G
00:21:42.870 + nvme_files['nvme-openstack.img']=8G
00:21:42.870 + nvme_files['nvme-zns.img']=5G
00:21:42.870 + (( SPDK_TEST_NVME_PMR == 1 ))
00:21:42.870 + (( SPDK_TEST_FTL == 1 ))
00:21:42.870 + nvme_files["nvme-ftl.img"]=6G
00:21:42.870 + (( SPDK_TEST_NVME_FDP == 1 ))
00:21:42.870 + nvme_files["nvme-fdp.img"]=1G
00:21:42.870 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:21:42.870 + for nvme in "${!nvme_files[@]}"
00:21:42.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:21:42.870 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:21:42.870 + for nvme in "${!nvme_files[@]}"
00:21:42.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:21:42.870 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:21:42.870 + for nvme in "${!nvme_files[@]}"
00:21:42.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:21:42.870 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:21:42.870 + for nvme in "${!nvme_files[@]}"
00:21:42.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:21:43.129 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:21:43.129 + for nvme in "${!nvme_files[@]}"
00:21:43.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:21:43.129 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:21:43.129 + for nvme in "${!nvme_files[@]}"
00:21:43.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:21:43.129 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:21:43.129 + for nvme in "${!nvme_files[@]}"
00:21:43.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:21:43.129 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:21:43.129 + for nvme in "${!nvme_files[@]}"
00:21:43.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:21:43.129 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:21:43.129 + for nvme in "${!nvme_files[@]}"
00:21:43.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:21:43.387 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:21:43.387 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:21:43.387 + echo 'End stage prepare_nvme.sh'
00:21:43.387 End stage prepare_nvme.sh
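Each loop iteration above hands one name/size pair to create_nvme_img.sh, and the "Formatting ... fmt=raw ... preallocation=falloc" lines are characteristic qemu-img output. A rough hand-run equivalent for the plain 5G image (illustrative; not the script's actual contents):

    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex1-nvme.img 5G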
00:21:43.397 [Pipeline] sh
00:21:43.675 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:21:43.675 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:21:43.675
00:21:43.675 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:21:43.675 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:21:43.675 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:21:43.675 HELP=0
00:21:43.675 DRY_RUN=0
00:21:43.675 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:21:43.675 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:21:43.675 NVME_AUTO_CREATE=0
00:21:43.675 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:21:43.675 NVME_CMB=,,,,
00:21:43.675 NVME_PMR=,,,,
00:21:43.675 NVME_ZNS=,,,,
00:21:43.675 NVME_MS=true,,,,
00:21:43.675 NVME_FDP=,,,on,
00:21:43.675 SPDK_VAGRANT_DISTRO=fedora39
00:21:43.675 SPDK_VAGRANT_VMCPU=10
00:21:43.675 SPDK_VAGRANT_VMRAM=12288
00:21:43.675 SPDK_VAGRANT_PROVIDER=libvirt
00:21:43.675 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:21:43.675 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:21:43.675 SPDK_OPENSTACK_NETWORK=0
00:21:43.675 VAGRANT_PACKAGE_BOX=0
00:21:43.675 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:21:43.675 FORCE_DISTRO=true
00:21:43.675 VAGRANT_BOX_VERSION=
00:21:43.675 EXTRA_VAGRANTFILES=
00:21:43.675 NIC_MODEL=e1000
00:21:43.675
00:21:43.675 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:21:43.675 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
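The -b disk specs in the Setup line are positional. Judging by where the values land in the variable dump above, the comma-separated fields after the image path are: disk type, extra namespace images, CMB, PMR, ZNS, metadata size, and FDP (this ordering is inferred from the dump, not read from the script). Two examples from this invocation:

    -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true   # 7th field -> NVME_MS=true
    -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on    # 8th field -> NVME_FDP=on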
00:21:46.958 Bringing machine 'default' up with 'libvirt' provider...
00:21:47.891 ==> default: Creating image (snapshot of base box volume).
00:21:48.149 ==> default: Creating domain with the following settings...
00:21:48.149 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1728957056_5251f18d8ea0cd8d9f04
00:21:48.149 ==> default:  -- Domain type: kvm
00:21:48.149 ==> default:  -- Cpus: 10
00:21:48.149 ==> default:  -- Feature: acpi
00:21:48.149 ==> default:  -- Feature: apic
00:21:48.149 ==> default:  -- Feature: pae
00:21:48.149 ==> default:  -- Memory: 12288M
00:21:48.149 ==> default:  -- Memory Backing: hugepages:
00:21:48.149 ==> default:  -- Management MAC:
00:21:48.149 ==> default:  -- Loader:
00:21:48.149 ==> default:  -- Nvram:
00:21:48.149 ==> default:  -- Base box: spdk/fedora39
00:21:48.149 ==> default:  -- Storage pool: default
00:21:48.149 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728957056_5251f18d8ea0cd8d9f04.img (20G)
00:21:48.149 ==> default:  -- Volume Cache: default
00:21:48.149 ==> default:  -- Kernel:
00:21:48.149 ==> default:  -- Initrd:
00:21:48.149 ==> default:  -- Graphics Type: vnc
00:21:48.149 ==> default:  -- Graphics Port: -1
00:21:48.149 ==> default:  -- Graphics IP: 127.0.0.1
00:21:48.149 ==> default:  -- Graphics Password: Not defined
00:21:48.149 ==> default:  -- Video Type: cirrus
00:21:48.149 ==> default:  -- Video VRAM: 9216
00:21:48.149 ==> default:  -- Sound Type:
00:21:48.149 ==> default:  -- Keymap: en-us
00:21:48.149 ==> default:  -- TPM Path:
00:21:48.149 ==> default:  -- INPUT: type=mouse, bus=ps2
00:21:48.149 ==> default:  -- Command line args:
00:21:48.149 ==> default:  -> value=-device,
00:21:48.149 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:21:48.149 ==> default:  -> value=-drive,
00:21:48.149 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:21:48.149 ==> default:  -> value=-device,
00:21:48.149 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:21:48.149 ==> default:  -> value=-device,
00:21:48.149 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:21:48.149 ==> default:  -> value=-drive,
00:21:48.149 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:21:48.149 ==> default:  -> value=-device,
00:21:48.149 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:48.149 ==> default:  -> value=-device,
00:21:48.149 ==> default:  -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:21:48.149 ==> default:  -> value=-drive,
00:21:48.407 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:21:48.407 ==> default:  -> value=-device,
00:21:48.407 ==> default:  -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:48.407 ==> default:  -> value=-drive,
00:21:48.407 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:21:48.407 ==> default:  -> value=-device,
00:21:48.407 ==> default:  -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:48.407 ==> default:  -> value=-drive,
00:21:48.407 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:21:48.407 ==> default:  -> value=-device,
00:21:48.407 ==> default:  -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:48.407 ==> default:  -> value=-device,
00:21:48.407 ==> default:  -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:21:48.407 ==> default:  -> value=-device,
00:21:48.407 ==> default:  -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:21:48.407 ==> default:  -> value=-drive,
00:21:48.407 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:21:48.407 ==> default:  -> value=-device,
00:21:48.407 ==> default:  -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
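Every backing file ends up as a -drive/-device pair on the QEMU command line: a controller (-device nvme) plus one namespace (-device nvme-ns) per image. One controller pulled out as a standalone sketch, with the unrelated VM flags omitted (values from the args above):

    qemu-system-x86_64 ... \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096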
00:21:48.407 ==> default: Creating shared folders metadata...
00:21:48.407 ==> default: Starting domain.
00:21:50.939 ==> default: Waiting for domain to get an IP address...
00:22:09.051 ==> default: Waiting for SSH to become available...
00:22:09.051 ==> default: Configuring and enabling network interfaces...
00:22:12.330     default: SSH address: 192.168.121.74:22
00:22:12.330     default: SSH username: vagrant
00:22:12.330     default: SSH auth method: private key
00:22:14.231 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:22:22.390 ==> default: Mounting SSHFS shared folder...
00:22:23.763 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:22:23.763 ==> default: Checking Mount..
00:22:24.721 ==> default: Folder Successfully Mounted!
00:22:24.721 ==> default: Running provisioner: file...
00:22:25.286     default: ~/.gitconfig => .gitconfig
00:22:25.543
00:22:25.543 SUCCESS!
00:22:25.543
00:22:25.543 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:22:25.543 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:22:25.543 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:22:25.543
00:22:25.551 [Pipeline] }
00:22:25.565 [Pipeline] // stage
00:22:25.574 [Pipeline] dir
00:22:25.575 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:22:25.576 [Pipeline] {
00:22:25.588 [Pipeline] catchError
00:22:25.590 [Pipeline] {
00:22:25.603 [Pipeline] sh
00:22:25.880 + vagrant ssh-config --host vagrant
00:22:25.880 + sed -ne /^Host/,$p
00:22:25.880 + tee ssh_conf
00:22:30.059 Host vagrant
00:22:30.059   HostName 192.168.121.74
00:22:30.059   User vagrant
00:22:30.059   Port 22
00:22:30.059   UserKnownHostsFile /dev/null
00:22:30.059   StrictHostKeyChecking no
00:22:30.059   PasswordAuthentication no
00:22:30.059   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:22:30.059   IdentitiesOnly yes
00:22:30.059   LogLevel FATAL
00:22:30.059   ForwardAgent yes
00:22:30.059   ForwardX11 yes
00:22:30.059
00:22:30.073 [Pipeline] withEnv
00:22:30.075 [Pipeline] {
00:22:30.088 [Pipeline] sh
00:22:30.366 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:22:30.366 source /etc/os-release
00:22:30.366 [[ -e /image.version ]] && img=$(< /image.version)
00:22:30.366 # Minimal, systemd-like check.
00:22:30.366 if [[ -e /.dockerenv ]]; then
00:22:30.366 # Clear garbage from the node's name:
00:22:30.366 # agt-er_autotest_547-896 -> autotest_547-896
00:22:30.366 # $HOSTNAME is the actual container id
00:22:30.366 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:22:30.366 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:22:30.366 # We can assume this is a mount from a host where container is running,
00:22:30.366 # so fetch its hostname to easily identify the target swarm worker.
00:22:30.366 container="$(< /etc/hostname) ($agent)"
00:22:30.366 else
00:22:30.366 # Fallback
00:22:30.366 container=$agent
00:22:30.366 fi
00:22:30.366 fi
00:22:30.366 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:22:30.366
00:22:30.637 [Pipeline] }
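On this VM the script's closing echo resolves to something like the line below (NAME, VERSION_ID and the kernel release appear later in this log; the image and container fields fall back to N/A when unset):

    Fedora Linux 39|6.8.9-200.fc39.x86_64|N/A|N/A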
00:22:30.656 [Pipeline] // withEnv
00:22:30.666 [Pipeline] setCustomBuildProperty
00:22:30.683 [Pipeline] stage
00:22:30.686 [Pipeline] { (Tests)
00:22:30.706 [Pipeline] sh
00:22:30.985 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:22:31.256 [Pipeline] sh
00:22:31.538 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:22:31.816 [Pipeline] timeout
00:22:31.816 Timeout set to expire in 50 min
00:22:31.818 [Pipeline] {
00:22:31.835 [Pipeline] sh
00:22:32.148 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:22:32.715 HEAD is now at d056e7588 nvme: Use spdk_nvme_trtype_is_fabrics() in CTRLR_STRING()
00:22:32.727 [Pipeline] sh
00:22:33.006 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:22:33.278 [Pipeline] sh
00:22:33.558 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:22:33.831 [Pipeline] sh
00:22:34.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:22:34.378 ++ readlink -f spdk_repo
00:22:34.378 + DIR_ROOT=/home/vagrant/spdk_repo
00:22:34.378 + [[ -n /home/vagrant/spdk_repo ]]
00:22:34.378 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:22:34.378 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:22:34.378 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:22:34.378 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:22:34.378 + [[ -d /home/vagrant/spdk_repo/output ]]
00:22:34.378 + [[ nvme-vg-autotest == pkgdep-* ]]
00:22:34.378 + cd /home/vagrant/spdk_repo
00:22:34.378 + source /etc/os-release
00:22:34.378 ++ NAME='Fedora Linux'
00:22:34.378 ++ VERSION='39 (Cloud Edition)'
00:22:34.378 ++ ID=fedora
00:22:34.378 ++ VERSION_ID=39
00:22:34.378 ++ VERSION_CODENAME=
00:22:34.378 ++ PLATFORM_ID=platform:f39
00:22:34.378 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:22:34.378 ++ ANSI_COLOR='0;38;2;60;110;180'
00:22:34.378 ++ LOGO=fedora-logo-icon
00:22:34.378 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:22:34.378 ++ HOME_URL=https://fedoraproject.org/
00:22:34.378 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:22:34.378 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:22:34.378 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:22:34.378 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:22:34.378 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:22:34.378 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:22:34.378 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:22:34.378 ++ SUPPORT_END=2024-11-12
00:22:34.378 ++ VARIANT='Cloud Edition'
00:22:34.378 ++ VARIANT_ID=cloud
00:22:34.378 + uname -a
00:22:34.378 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:22:34.378 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:22:34.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:22:34.895 Hugepages
00:22:34.895 node     hugesize     free /  total
00:22:34.895 node0   1048576kB        0 /      0
00:22:34.895 node0      2048kB        0 /      0
00:22:34.895
00:22:34.895 Type     BDF             Vendor  Device  NUMA     Driver      Device   Block devices
00:22:34.895 virtio   0000:00:03.0    1af4    1001    unknown  virtio-pci  -        vda
00:22:35.153 NVMe     0000:00:10.0    1b36    0010    unknown  nvme        nvme0    nvme0n1
00:22:35.153 NVMe     0000:00:11.0    1b36    0010    unknown  nvme        nvme1    nvme1n1
00:22:35.153 NVMe     0000:00:12.0    1b36    0010    unknown  nvme        nvme2    nvme2n1 nvme2n2 nvme2n3
00:22:35.153 NVMe     0000:00:13.0    1b36    0010    unknown  nvme        nvme3    nvme3n1
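setup.sh status above shows the four emulated controllers still bound to the kernel nvme driver and no hugepages reserved yet. Reservation and PCI rebinding are normally done by the same script before tests run, for example (size illustrative):

    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh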
00:22:35.153 + rm -f /tmp/spdk-ld-path
00:22:35.153 + source autorun-spdk.conf
00:22:35.153 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:35.153 ++ SPDK_TEST_NVME=1
00:22:35.153 ++ SPDK_TEST_FTL=1
00:22:35.153 ++ SPDK_TEST_ISAL=1
00:22:35.153 ++ SPDK_RUN_ASAN=1
00:22:35.153 ++ SPDK_RUN_UBSAN=1
00:22:35.153 ++ SPDK_TEST_XNVME=1
00:22:35.153 ++ SPDK_TEST_NVME_FDP=1
00:22:35.153 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:35.153 ++ RUN_NIGHTLY=0
00:22:35.153 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:22:35.153 + [[ -n '' ]]
00:22:35.153 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:22:35.153 + for M in /var/spdk/build-*-manifest.txt
00:22:35.153 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:22:35.153 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:22:35.153 + for M in /var/spdk/build-*-manifest.txt
00:22:35.153 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:22:35.153 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:22:35.153 + for M in /var/spdk/build-*-manifest.txt
00:22:35.153 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:22:35.153 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:22:35.153 ++ uname
00:22:35.153 + [[ Linux == \L\i\n\u\x ]]
00:22:35.153 + sudo dmesg -T
00:22:35.153 + sudo dmesg --clear
00:22:35.153 + dmesg_pid=5294
00:22:35.153 + [[ Fedora Linux == FreeBSD ]]
00:22:35.153 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:35.153 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:35.153 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:22:35.153 + [[ -x /usr/src/fio-static/fio ]]
00:22:35.153 + sudo dmesg -Tw
00:22:35.153 + export FIO_BIN=/usr/src/fio-static/fio
00:22:35.153 + FIO_BIN=/usr/src/fio-static/fio
00:22:35.153 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:22:35.153 + [[ ! -v VFIO_QEMU_BIN ]]
00:22:35.153 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:22:35.153 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:35.153 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:35.153 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:22:35.153 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:35.153 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:35.153 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:35.153 Test configuration:
00:22:35.153 SPDK_RUN_FUNCTIONAL_TEST=1
00:22:35.153 SPDK_TEST_NVME=1
00:22:35.153 SPDK_TEST_FTL=1
00:22:35.153 SPDK_TEST_ISAL=1
00:22:35.153 SPDK_RUN_ASAN=1
00:22:35.153 SPDK_RUN_UBSAN=1
00:22:35.153 SPDK_TEST_XNVME=1
00:22:35.153 SPDK_TEST_NVME_FDP=1
00:22:35.153 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:35.411 RUN_NIGHTLY=0
00:22:35.411 01:51:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:22:35.411 01:51:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:35.411 01:51:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:22:35.411 01:51:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:22:35.411 01:51:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:35.411 01:51:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:35.411 01:51:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.412 01:51:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.412 01:51:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.412 01:51:44 -- paths/export.sh@5 -- $ export PATH
00:22:35.412 01:51:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.412 01:51:44 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:22:35.412 01:51:44 -- common/autobuild_common.sh@486 -- $ date +%s
00:22:35.412 01:51:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728957104.XXXXXX
00:22:35.412 01:51:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728957104.OlhtEx
00:22:35.412 01:51:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:22:35.412 01:51:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:22:35.412 01:51:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:22:35.412 01:51:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:22:35.412 01:51:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:22:35.412 01:51:44 -- common/autobuild_common.sh@502 -- $ get_config_params
00:22:35.412 01:51:44 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:22:35.412 01:51:44 -- common/autotest_common.sh@10 -- $ set +x
00:22:35.412 01:51:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:22:35.412 01:51:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:22:35.412 01:51:44 -- pm/common@17 -- $ local monitor
00:22:35.412 01:51:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:35.412 01:51:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:35.412 01:51:44 -- pm/common@25 -- $ sleep 1
00:22:35.412 01:51:44 -- pm/common@21 -- $ date +%s
00:22:35.412 01:51:44 -- pm/common@21 -- $ date +%s
00:22:35.412 01:51:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728957104
00:22:35.412 01:51:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728957104
00:22:35.412 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728957104_collect-cpu-load.pm.log
00:22:35.412 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728957104_collect-vmstat.pm.log
00:22:36.346 01:51:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:22:36.346 01:51:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:22:36.346 01:51:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:22:36.346 01:51:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:22:36.346 01:51:45 -- spdk/autobuild.sh@16 -- $ date -u
00:22:36.346 Tue Oct 15 01:51:45 AM UTC 2024
00:22:36.346 01:51:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:22:36.346 v25.01-pre-47-gd056e7588
00:22:36.346 01:51:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:22:36.346 01:51:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:22:36.346 01:51:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:22:36.346 01:51:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:22:36.346 01:51:45 -- common/autotest_common.sh@10 -- $ set +x
00:22:36.346 ************************************
00:22:36.346 START TEST asan
00:22:36.346 ************************************
00:22:36.346 using asan
00:22:36.346 01:51:45 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:22:36.346
00:22:36.346 real 0m0.000s
00:22:36.346 user 0m0.000s
00:22:36.346 sys 0m0.000s
00:22:36.346 01:51:45 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:22:36.346 ************************************
00:22:36.346 END TEST asan
00:22:36.346 ************************************
00:22:36.346 01:51:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:22:36.346 01:51:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:22:36.346 01:51:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:22:36.346 01:51:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:22:36.346 01:51:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:22:36.346 01:51:45 -- common/autotest_common.sh@10 -- $ set +x
00:22:36.346 ************************************
00:22:36.346 START TEST ubsan
00:22:36.346 ************************************
00:22:36.346 using ubsan
00:22:36.346 01:51:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:22:36.346
00:22:36.346 real 0m0.000s
00:22:36.346 user 0m0.000s
00:22:36.346 sys 0m0.000s
00:22:36.346 01:51:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:22:36.346 ************************************
00:22:36.346 END TEST ubsan
00:22:36.346 ************************************
00:22:36.346 01:51:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:22:36.346 01:51:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:22:36.346 01:51:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:22:36.346 01:51:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:22:36.346 01:51:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:22:36.346 01:51:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:22:36.346 01:51:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:22:36.346 01:51:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:22:36.346 01:51:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:22:36.346 01:51:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:22:36.605 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:22:36.605 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:22:37.171 Using 'verbs' RDMA provider
00:22:50.745 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:23:05.644 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:23:05.644 Creating mk/config.mk...done.
00:23:05.644 Creating mk/cc.flags.mk...done.
00:23:05.644 Type 'make' to build.
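The configure invocation above is just config_params from earlier plus --with-shared. Reproducing the build outside CI comes down to the same two steps the log runs next (flag set trimmed for brevity):

    ./configure --enable-debug --enable-asan --enable-ubsan --with-xnvme
    make -j10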
00:23:05.644 01:52:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:23:05.644 01:52:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:23:05.644 01:52:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:23:05.644 01:52:13 -- common/autotest_common.sh@10 -- $ set +x
00:23:05.644 ************************************
00:23:05.644 START TEST make
00:23:05.644 ************************************
00:23:05.644 01:52:13 make -- common/autotest_common.sh@1125 -- $ make -j10
00:23:05.644 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:23:05.644 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:23:05.644 meson setup builddir \
00:23:05.644 -Dwith-libaio=enabled \
00:23:05.644 -Dwith-liburing=enabled \
00:23:05.644 -Dwith-libvfn=disabled \
00:23:05.644 -Dwith-spdk=false && \
00:23:05.644 meson compile -C builddir && \
00:23:05.644 cd -)
00:23:05.644 make[1]: Nothing to be done for 'all'.
00:23:07.545 The Meson build system
00:23:07.545 Version: 1.5.0
00:23:07.545 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:23:07.545 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:23:07.545 Build type: native build
00:23:07.545 Project name: xnvme
00:23:07.545 Project version: 0.7.3
00:23:07.545 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:23:07.545 C linker for the host machine: cc ld.bfd 2.40-14
00:23:07.545 Host machine cpu family: x86_64
00:23:07.545 Host machine cpu: x86_64
00:23:07.545 Message: host_machine.system: linux
00:23:07.545 Compiler for C supports arguments -Wno-missing-braces: YES
00:23:07.545 Compiler for C supports arguments -Wno-cast-function-type: YES
00:23:07.545 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:23:07.545 Run-time dependency threads found: YES
00:23:07.545 Has header "setupapi.h" : NO
00:23:07.545 Has header "linux/blkzoned.h" : YES
00:23:07.545 Has header "linux/blkzoned.h" : YES (cached)
00:23:07.545 Has header "libaio.h" : YES
00:23:07.545 Library aio found: YES
00:23:07.545 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:23:07.545 Run-time dependency liburing found: YES 2.2
00:23:07.545 Dependency libvfn skipped: feature with-libvfn disabled
00:23:07.545 Run-time dependency appleframeworks found: NO (tried framework)
00:23:07.545 Run-time dependency appleframeworks found: NO (tried framework)
00:23:07.545 Configuring xnvme_config.h using configuration
00:23:07.545 Configuring xnvme.spec using configuration
00:23:07.545 Run-time dependency bash-completion found: YES 2.11
00:23:07.545 Message: Bash-completions: /usr/share/bash-completion/completions
00:23:07.545 Program cp found: YES (/usr/bin/cp)
00:23:07.545 Has header "winsock2.h" : NO
00:23:07.545 Has header "dbghelp.h" : NO
00:23:07.545 Library rpcrt4 found: NO
00:23:07.545 Library rt found: YES
00:23:07.545 Checking for function "clock_gettime" with dependency -lrt: YES
00:23:07.545 Found CMake: /usr/bin/cmake (3.27.7)
00:23:07.545 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:23:07.545 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:23:07.545 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:23:07.545 Build targets in project: 32
00:23:07.545
00:23:07.545 xnvme 0.7.3
00:23:07.545
00:23:07.545 User defined options
00:23:07.545 with-libaio : enabled
00:23:07.545 with-liburing: enabled
00:23:07.545 with-libvfn : disabled
00:23:07.545 with-spdk : false
00:23:07.545
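The "User defined options" summary mirrors the -D flags passed to meson setup above. After the initial setup, a single feature can be flipped on the existing build directory instead of rerunning the whole command, e.g. (illustrative):

    meson configure builddir -Dwith-libvfn=enabled
    meson compile -C builddir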
00:23:07.545 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:23:08.112 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:23:08.112 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:23:08.112 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:23:08.112 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:23:08.112 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:23:08.112 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:23:08.112 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:23:08.112 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:23:08.112 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:23:08.112 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:23:08.112 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:23:08.112 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:23:08.112 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:23:08.112 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:23:08.370 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:23:08.370 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:23:08.370 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:23:08.370 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:23:08.370 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:23:08.370 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:23:08.370 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:23:08.370 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:23:08.370 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:23:08.370 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:23:08.370 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:23:08.370 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:23:08.370 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:23:08.370 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:23:08.370 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:23:08.370 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:23:08.370 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:23:08.370 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:23:08.370 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:23:08.370 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:23:08.370 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:23:08.370 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:23:08.370 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:23:08.370 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:23:08.370 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:23:08.370 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:23:08.370 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:23:08.370 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:23:08.371 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:23:08.629 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:23:08.629 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:23:08.629 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:23:08.629 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:23:08.629 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:23:08.629 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:23:08.629 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:23:08.629 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:23:08.629 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:23:08.629 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:23:08.629 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:23:08.629 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:23:08.629 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:23:08.629 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:23:08.629 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:23:08.629 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:23:08.629 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:23:08.629 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:23:08.629 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:23:08.629 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:23:08.629 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:23:08.629 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:23:08.887 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:23:08.887 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:23:08.887 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:23:08.887 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:23:08.887 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:23:08.887 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:23:08.887 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:23:08.887 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:23:08.887 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:23:08.887 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:23:08.887 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:23:08.887 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:23:08.887 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:23:08.887 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:23:08.887 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:23:08.887 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:23:08.887 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:23:09.144 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:23:09.144 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:23:09.144 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:23:09.144 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:23:09.144 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:23:09.144 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:23:09.144 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:23:09.144 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:23:09.144 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:23:09.144 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:23:09.144 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:23:09.144 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:23:09.144 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:23:09.144 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:23:09.144 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:23:09.402 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:23:09.402 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:23:09.402 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:23:09.402 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:23:09.402 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:23:09.402 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:23:09.403 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:23:09.403 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:23:09.403 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:23:09.403 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:23:09.403 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:23:09.403 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:23:09.403 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:23:09.403 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:23:09.403 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:23:09.403 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:23:09.403 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:23:09.403 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:23:09.403 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:23:09.403 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:23:09.403 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:23:09.403 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:23:09.403 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:23:09.403 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:23:09.403 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:23:09.403 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:23:09.403 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:23:09.661 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:23:09.661 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:23:09.661 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:23:09.661 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:23:09.661 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:23:09.661 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:23:09.661 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:23:09.661 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:23:09.661 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:23:09.661 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:23:09.661 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:23:09.661 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:23:09.661 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:23:09.661 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:23:09.661 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:23:09.919 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:23:09.919 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:23:09.919 [141/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:23:09.919 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:23:09.919 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:23:09.919 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:23:09.919 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:23:09.919 [146/203] Linking target lib/libxnvme.so
00:23:09.919 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:23:09.919 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:23:09.919 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:23:09.919 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:23:09.919 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:23:09.919 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:23:09.919 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:23:10.176 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:23:10.176 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:23:10.176 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:23:10.176 [157/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:23:10.176 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:23:10.176 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:23:10.176 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:23:10.176 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:23:10.176 [162/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:23:10.176 [163/203] Compiling C object tools/xdd.p/xdd.c.o
00:23:10.176 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:23:10.176 [165/203] Compiling C object tools/lblk.p/lblk.c.o
00:23:10.176 [166/203] Compiling C object tools/kvs.p/kvs.c.o
00:23:10.434 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:23:10.434 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:23:10.434 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:23:10.434 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:23:10.434 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:23:10.434 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:23:10.691 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:23:10.691 [174/203] Linking static target lib/libxnvme.a
00:23:10.691 [175/203] Linking target tests/xnvme_tests_async_intf
00:23:10.691 [176/203] Linking target tests/xnvme_tests_xnvme_cli
00:23:10.691 [177/203] Linking target tests/xnvme_tests_lblk
00:23:10.691 [178/203] Linking target tests/xnvme_tests_enum
00:23:10.691 [179/203] Linking target tests/xnvme_tests_scc
00:23:10.691 [180/203] Linking target tests/xnvme_tests_ioworker
00:23:10.691 [181/203] Linking target tests/xnvme_tests_znd_explicit_open
00:23:10.691 [182/203] Linking target tests/xnvme_tests_znd_state
00:23:10.691 [183/203] Linking target tests/xnvme_tests_znd_append
00:23:10.691 [184/203] Linking target tests/xnvme_tests_kvs
00:23:10.691 [185/203] Linking target tests/xnvme_tests_znd_zrwa
00:23:10.691 [186/203] Linking target tests/xnvme_tests_buf
00:23:10.691 [187/203] Linking target tests/xnvme_tests_xnvme_file
00:23:10.691 [188/203] Linking target tests/xnvme_tests_cli
00:23:10.691 [189/203] Linking target tests/xnvme_tests_map
00:23:10.691 [190/203] Linking target tools/lblk
00:23:10.691 [191/203] Linking target tools/xdd
00:23:10.691 [192/203] Linking target tools/xnvme
00:23:10.691 [193/203] Linking target tools/zoned
00:23:10.691 [194/203] Linking target tools/xnvme_file
00:23:10.691 [195/203] Linking target tools/kvs
00:23:10.691 [196/203] Linking target examples/xnvme_hello
00:23:10.949 [197/203] Linking target examples/xnvme_dev
00:23:10.949 [198/203] Linking target examples/xnvme_enum
00:23:10.949 [199/203] Linking target examples/zoned_io_async
00:23:10.949 [200/203] Linking target examples/xnvme_io_async
00:23:10.949 [201/203] Linking target examples/zoned_io_sync
00:23:10.949 [202/203] Linking target examples/xnvme_single_async
00:23:10.949 [203/203] Linking target examples/xnvme_single_sync
00:23:10.949 INFO: autodetecting backend as ninja
00:23:10.949 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:23:10.949 /home/vagrant/spdk_repo/spdk/xnvmebuild
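As the INFO lines show, meson compile is a thin wrapper that autodetects ninja and shells out to it, so the same xnvme build can be driven directly:

    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir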
1.9.5 00:23:19.086 Run-time dependency libarchive found: NO (tried pkgconfig) 00:23:19.086 Run-time dependency libbsd found: NO (tried pkgconfig) 00:23:19.086 Run-time dependency jansson found: NO (tried pkgconfig) 00:23:19.086 Run-time dependency openssl found: YES 3.1.1 00:23:19.086 Run-time dependency libpcap found: YES 1.10.4 00:23:19.086 Has header "pcap.h" with dependency libpcap: YES 00:23:19.086 Compiler for C supports arguments -Wcast-qual: YES 00:23:19.086 Compiler for C supports arguments -Wdeprecated: YES 00:23:19.086 Compiler for C supports arguments -Wformat: YES 00:23:19.086 Compiler for C supports arguments -Wformat-nonliteral: NO 00:23:19.086 Compiler for C supports arguments -Wformat-security: NO 00:23:19.086 Compiler for C supports arguments -Wmissing-declarations: YES 00:23:19.086 Compiler for C supports arguments -Wmissing-prototypes: YES 00:23:19.086 Compiler for C supports arguments -Wnested-externs: YES 00:23:19.087 Compiler for C supports arguments -Wold-style-definition: YES 00:23:19.087 Compiler for C supports arguments -Wpointer-arith: YES 00:23:19.087 Compiler for C supports arguments -Wsign-compare: YES 00:23:19.087 Compiler for C supports arguments -Wstrict-prototypes: YES 00:23:19.087 Compiler for C supports arguments -Wundef: YES 00:23:19.087 Compiler for C supports arguments -Wwrite-strings: YES 00:23:19.087 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:23:19.087 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:23:19.087 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:23:19.087 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:23:19.087 Program objdump found: YES (/usr/bin/objdump) 00:23:19.087 Compiler for C supports arguments -mavx512f: YES 00:23:19.087 Checking if "AVX512 checking" compiles: YES 00:23:19.087 Fetching value of define "__SSE4_2__" : 1 00:23:19.087 Fetching value of define "__AES__" : 1 00:23:19.087 Fetching value of define "__AVX__" : 1 00:23:19.087 Fetching value of define "__AVX2__" : 1 00:23:19.087 Fetching value of define "__AVX512BW__" : (undefined) 00:23:19.087 Fetching value of define "__AVX512CD__" : (undefined) 00:23:19.087 Fetching value of define "__AVX512DQ__" : (undefined) 00:23:19.087 Fetching value of define "__AVX512F__" : (undefined) 00:23:19.087 Fetching value of define "__AVX512VL__" : (undefined) 00:23:19.087 Fetching value of define "__PCLMUL__" : 1 00:23:19.087 Fetching value of define "__RDRND__" : 1 00:23:19.087 Fetching value of define "__RDSEED__" : 1 00:23:19.087 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:23:19.087 Fetching value of define "__znver1__" : (undefined) 00:23:19.087 Fetching value of define "__znver2__" : (undefined) 00:23:19.087 Fetching value of define "__znver3__" : (undefined) 00:23:19.087 Fetching value of define "__znver4__" : (undefined) 00:23:19.087 Library asan found: YES 00:23:19.087 Compiler for C supports arguments -Wno-format-truncation: YES 00:23:19.087 Message: lib/log: Defining dependency "log" 00:23:19.087 Message: lib/kvargs: Defining dependency "kvargs" 00:23:19.087 Message: lib/telemetry: Defining dependency "telemetry" 00:23:19.087 Library rt found: YES 00:23:19.087 Checking for function "getentropy" : NO 00:23:19.087 Message: lib/eal: Defining dependency "eal" 00:23:19.087 Message: lib/ring: Defining dependency "ring" 00:23:19.087 Message: lib/rcu: Defining dependency "rcu" 00:23:19.087 Message: lib/mempool: Defining dependency "mempool" 00:23:19.087 Message: lib/mbuf: Defining 
dependency "mbuf" 00:23:19.087 Fetching value of define "__PCLMUL__" : 1 (cached) 00:23:19.087 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:23:19.087 Compiler for C supports arguments -mpclmul: YES 00:23:19.087 Compiler for C supports arguments -maes: YES 00:23:19.087 Compiler for C supports arguments -mavx512f: YES (cached) 00:23:19.087 Compiler for C supports arguments -mavx512bw: YES 00:23:19.087 Compiler for C supports arguments -mavx512dq: YES 00:23:19.087 Compiler for C supports arguments -mavx512vl: YES 00:23:19.087 Compiler for C supports arguments -mvpclmulqdq: YES 00:23:19.087 Compiler for C supports arguments -mavx2: YES 00:23:19.087 Compiler for C supports arguments -mavx: YES 00:23:19.087 Message: lib/net: Defining dependency "net" 00:23:19.087 Message: lib/meter: Defining dependency "meter" 00:23:19.087 Message: lib/ethdev: Defining dependency "ethdev" 00:23:19.087 Message: lib/pci: Defining dependency "pci" 00:23:19.087 Message: lib/cmdline: Defining dependency "cmdline" 00:23:19.087 Message: lib/hash: Defining dependency "hash" 00:23:19.087 Message: lib/timer: Defining dependency "timer" 00:23:19.087 Message: lib/compressdev: Defining dependency "compressdev" 00:23:19.087 Message: lib/cryptodev: Defining dependency "cryptodev" 00:23:19.087 Message: lib/dmadev: Defining dependency "dmadev" 00:23:19.087 Compiler for C supports arguments -Wno-cast-qual: YES 00:23:19.087 Message: lib/power: Defining dependency "power" 00:23:19.087 Message: lib/reorder: Defining dependency "reorder" 00:23:19.087 Message: lib/security: Defining dependency "security" 00:23:19.087 Has header "linux/userfaultfd.h" : YES 00:23:19.087 Has header "linux/vduse.h" : YES 00:23:19.087 Message: lib/vhost: Defining dependency "vhost" 00:23:19.087 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:23:19.087 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:23:19.087 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:23:19.087 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:23:19.087 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:23:19.087 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:23:19.087 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:23:19.087 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:23:19.087 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:23:19.087 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:23:19.087 Program doxygen found: YES (/usr/local/bin/doxygen) 00:23:19.087 Configuring doxy-api-html.conf using configuration 00:23:19.087 Configuring doxy-api-man.conf using configuration 00:23:19.087 Program mandb found: YES (/usr/bin/mandb) 00:23:19.087 Program sphinx-build found: NO 00:23:19.087 Configuring rte_build_config.h using configuration 00:23:19.087 Message: 00:23:19.087 ================= 00:23:19.087 Applications Enabled 00:23:19.087 ================= 00:23:19.087 00:23:19.087 apps: 00:23:19.087 00:23:19.087 00:23:19.087 Message: 00:23:19.087 ================= 00:23:19.087 Libraries Enabled 00:23:19.087 ================= 00:23:19.087 00:23:19.087 libs: 00:23:19.087 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:23:19.087 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:23:19.087 cryptodev, dmadev, power, reorder, security, vhost, 00:23:19.087 00:23:19.087 Message: 00:23:19.087 =============== 00:23:19.087 Drivers 
Enabled 00:23:19.087 =============== 00:23:19.087 00:23:19.087 common: 00:23:19.087 00:23:19.087 bus: 00:23:19.087 pci, vdev, 00:23:19.087 mempool: 00:23:19.087 ring, 00:23:19.087 dma: 00:23:19.087 00:23:19.087 net: 00:23:19.087 00:23:19.087 crypto: 00:23:19.087 00:23:19.087 compress: 00:23:19.087 00:23:19.087 vdpa: 00:23:19.087 00:23:19.087 00:23:19.087 Message: 00:23:19.087 ================= 00:23:19.087 Content Skipped 00:23:19.087 ================= 00:23:19.087 00:23:19.087 apps: 00:23:19.087 dumpcap: explicitly disabled via build config 00:23:19.087 graph: explicitly disabled via build config 00:23:19.088 pdump: explicitly disabled via build config 00:23:19.088 proc-info: explicitly disabled via build config 00:23:19.088 test-acl: explicitly disabled via build config 00:23:19.088 test-bbdev: explicitly disabled via build config 00:23:19.088 test-cmdline: explicitly disabled via build config 00:23:19.088 test-compress-perf: explicitly disabled via build config 00:23:19.088 test-crypto-perf: explicitly disabled via build config 00:23:19.088 test-dma-perf: explicitly disabled via build config 00:23:19.088 test-eventdev: explicitly disabled via build config 00:23:19.088 test-fib: explicitly disabled via build config 00:23:19.088 test-flow-perf: explicitly disabled via build config 00:23:19.088 test-gpudev: explicitly disabled via build config 00:23:19.088 test-mldev: explicitly disabled via build config 00:23:19.088 test-pipeline: explicitly disabled via build config 00:23:19.088 test-pmd: explicitly disabled via build config 00:23:19.088 test-regex: explicitly disabled via build config 00:23:19.088 test-sad: explicitly disabled via build config 00:23:19.088 test-security-perf: explicitly disabled via build config 00:23:19.088 00:23:19.088 libs: 00:23:19.088 argparse: explicitly disabled via build config 00:23:19.088 metrics: explicitly disabled via build config 00:23:19.088 acl: explicitly disabled via build config 00:23:19.088 bbdev: explicitly disabled via build config 00:23:19.088 bitratestats: explicitly disabled via build config 00:23:19.088 bpf: explicitly disabled via build config 00:23:19.088 cfgfile: explicitly disabled via build config 00:23:19.088 distributor: explicitly disabled via build config 00:23:19.088 efd: explicitly disabled via build config 00:23:19.088 eventdev: explicitly disabled via build config 00:23:19.088 dispatcher: explicitly disabled via build config 00:23:19.088 gpudev: explicitly disabled via build config 00:23:19.088 gro: explicitly disabled via build config 00:23:19.088 gso: explicitly disabled via build config 00:23:19.088 ip_frag: explicitly disabled via build config 00:23:19.088 jobstats: explicitly disabled via build config 00:23:19.088 latencystats: explicitly disabled via build config 00:23:19.088 lpm: explicitly disabled via build config 00:23:19.088 member: explicitly disabled via build config 00:23:19.088 pcapng: explicitly disabled via build config 00:23:19.088 rawdev: explicitly disabled via build config 00:23:19.088 regexdev: explicitly disabled via build config 00:23:19.088 mldev: explicitly disabled via build config 00:23:19.088 rib: explicitly disabled via build config 00:23:19.088 sched: explicitly disabled via build config 00:23:19.088 stack: explicitly disabled via build config 00:23:19.088 ipsec: explicitly disabled via build config 00:23:19.088 pdcp: explicitly disabled via build config 00:23:19.088 fib: explicitly disabled via build config 00:23:19.088 port: explicitly disabled via build config 00:23:19.088 pdump: explicitly 
disabled via build config 00:23:19.088 table: explicitly disabled via build config 00:23:19.088 pipeline: explicitly disabled via build config 00:23:19.088 graph: explicitly disabled via build config 00:23:19.088 node: explicitly disabled via build config 00:23:19.088 00:23:19.088 drivers: 00:23:19.088 common/cpt: not in enabled drivers build config 00:23:19.088 common/dpaax: not in enabled drivers build config 00:23:19.088 common/iavf: not in enabled drivers build config 00:23:19.088 common/idpf: not in enabled drivers build config 00:23:19.088 common/ionic: not in enabled drivers build config 00:23:19.088 common/mvep: not in enabled drivers build config 00:23:19.088 common/octeontx: not in enabled drivers build config 00:23:19.088 bus/auxiliary: not in enabled drivers build config 00:23:19.088 bus/cdx: not in enabled drivers build config 00:23:19.088 bus/dpaa: not in enabled drivers build config 00:23:19.088 bus/fslmc: not in enabled drivers build config 00:23:19.088 bus/ifpga: not in enabled drivers build config 00:23:19.088 bus/platform: not in enabled drivers build config 00:23:19.088 bus/uacce: not in enabled drivers build config 00:23:19.088 bus/vmbus: not in enabled drivers build config 00:23:19.088 common/cnxk: not in enabled drivers build config 00:23:19.088 common/mlx5: not in enabled drivers build config 00:23:19.088 common/nfp: not in enabled drivers build config 00:23:19.088 common/nitrox: not in enabled drivers build config 00:23:19.088 common/qat: not in enabled drivers build config 00:23:19.088 common/sfc_efx: not in enabled drivers build config 00:23:19.088 mempool/bucket: not in enabled drivers build config 00:23:19.088 mempool/cnxk: not in enabled drivers build config 00:23:19.088 mempool/dpaa: not in enabled drivers build config 00:23:19.088 mempool/dpaa2: not in enabled drivers build config 00:23:19.088 mempool/octeontx: not in enabled drivers build config 00:23:19.088 mempool/stack: not in enabled drivers build config 00:23:19.088 dma/cnxk: not in enabled drivers build config 00:23:19.088 dma/dpaa: not in enabled drivers build config 00:23:19.088 dma/dpaa2: not in enabled drivers build config 00:23:19.088 dma/hisilicon: not in enabled drivers build config 00:23:19.088 dma/idxd: not in enabled drivers build config 00:23:19.088 dma/ioat: not in enabled drivers build config 00:23:19.088 dma/skeleton: not in enabled drivers build config 00:23:19.088 net/af_packet: not in enabled drivers build config 00:23:19.088 net/af_xdp: not in enabled drivers build config 00:23:19.088 net/ark: not in enabled drivers build config 00:23:19.088 net/atlantic: not in enabled drivers build config 00:23:19.088 net/avp: not in enabled drivers build config 00:23:19.088 net/axgbe: not in enabled drivers build config 00:23:19.088 net/bnx2x: not in enabled drivers build config 00:23:19.088 net/bnxt: not in enabled drivers build config 00:23:19.088 net/bonding: not in enabled drivers build config 00:23:19.088 net/cnxk: not in enabled drivers build config 00:23:19.088 net/cpfl: not in enabled drivers build config 00:23:19.088 net/cxgbe: not in enabled drivers build config 00:23:19.088 net/dpaa: not in enabled drivers build config 00:23:19.088 net/dpaa2: not in enabled drivers build config 00:23:19.088 net/e1000: not in enabled drivers build config 00:23:19.088 net/ena: not in enabled drivers build config 00:23:19.088 net/enetc: not in enabled drivers build config 00:23:19.088 net/enetfec: not in enabled drivers build config 00:23:19.088 net/enic: not in enabled drivers build config 00:23:19.088 
net/failsafe: not in enabled drivers build config 00:23:19.088 net/fm10k: not in enabled drivers build config 00:23:19.088 net/gve: not in enabled drivers build config 00:23:19.088 net/hinic: not in enabled drivers build config 00:23:19.088 net/hns3: not in enabled drivers build config 00:23:19.088 net/i40e: not in enabled drivers build config 00:23:19.088 net/iavf: not in enabled drivers build config 00:23:19.088 net/ice: not in enabled drivers build config 00:23:19.088 net/idpf: not in enabled drivers build config 00:23:19.088 net/igc: not in enabled drivers build config 00:23:19.088 net/ionic: not in enabled drivers build config 00:23:19.088 net/ipn3ke: not in enabled drivers build config 00:23:19.088 net/ixgbe: not in enabled drivers build config 00:23:19.088 net/mana: not in enabled drivers build config 00:23:19.088 net/memif: not in enabled drivers build config 00:23:19.088 net/mlx4: not in enabled drivers build config 00:23:19.088 net/mlx5: not in enabled drivers build config 00:23:19.088 net/mvneta: not in enabled drivers build config 00:23:19.088 net/mvpp2: not in enabled drivers build config 00:23:19.088 net/netvsc: not in enabled drivers build config 00:23:19.088 net/nfb: not in enabled drivers build config 00:23:19.088 net/nfp: not in enabled drivers build config 00:23:19.088 net/ngbe: not in enabled drivers build config 00:23:19.088 net/null: not in enabled drivers build config 00:23:19.088 net/octeontx: not in enabled drivers build config 00:23:19.088 net/octeon_ep: not in enabled drivers build config 00:23:19.088 net/pcap: not in enabled drivers build config 00:23:19.089 net/pfe: not in enabled drivers build config 00:23:19.089 net/qede: not in enabled drivers build config 00:23:19.089 net/ring: not in enabled drivers build config 00:23:19.089 net/sfc: not in enabled drivers build config 00:23:19.089 net/softnic: not in enabled drivers build config 00:23:19.089 net/tap: not in enabled drivers build config 00:23:19.089 net/thunderx: not in enabled drivers build config 00:23:19.089 net/txgbe: not in enabled drivers build config 00:23:19.089 net/vdev_netvsc: not in enabled drivers build config 00:23:19.089 net/vhost: not in enabled drivers build config 00:23:19.089 net/virtio: not in enabled drivers build config 00:23:19.089 net/vmxnet3: not in enabled drivers build config 00:23:19.089 raw/*: missing internal dependency, "rawdev" 00:23:19.089 crypto/armv8: not in enabled drivers build config 00:23:19.089 crypto/bcmfs: not in enabled drivers build config 00:23:19.089 crypto/caam_jr: not in enabled drivers build config 00:23:19.089 crypto/ccp: not in enabled drivers build config 00:23:19.089 crypto/cnxk: not in enabled drivers build config 00:23:19.089 crypto/dpaa_sec: not in enabled drivers build config 00:23:19.089 crypto/dpaa2_sec: not in enabled drivers build config 00:23:19.089 crypto/ipsec_mb: not in enabled drivers build config 00:23:19.089 crypto/mlx5: not in enabled drivers build config 00:23:19.089 crypto/mvsam: not in enabled drivers build config 00:23:19.089 crypto/nitrox: not in enabled drivers build config 00:23:19.089 crypto/null: not in enabled drivers build config 00:23:19.089 crypto/octeontx: not in enabled drivers build config 00:23:19.089 crypto/openssl: not in enabled drivers build config 00:23:19.089 crypto/scheduler: not in enabled drivers build config 00:23:19.089 crypto/uadk: not in enabled drivers build config 00:23:19.089 crypto/virtio: not in enabled drivers build config 00:23:19.089 compress/isal: not in enabled drivers build config 00:23:19.089 
compress/mlx5: not in enabled drivers build config 00:23:19.089 compress/nitrox: not in enabled drivers build config 00:23:19.089 compress/octeontx: not in enabled drivers build config 00:23:19.089 compress/zlib: not in enabled drivers build config 00:23:19.089 regex/*: missing internal dependency, "regexdev" 00:23:19.089 ml/*: missing internal dependency, "mldev" 00:23:19.089 vdpa/ifc: not in enabled drivers build config 00:23:19.089 vdpa/mlx5: not in enabled drivers build config 00:23:19.089 vdpa/nfp: not in enabled drivers build config 00:23:19.089 vdpa/sfc: not in enabled drivers build config 00:23:19.089 event/*: missing internal dependency, "eventdev" 00:23:19.089 baseband/*: missing internal dependency, "bbdev" 00:23:19.089 gpu/*: missing internal dependency, "gpudev" 00:23:19.089 00:23:19.089 00:23:19.089 Build targets in project: 85 00:23:19.089 00:23:19.089 DPDK 24.03.0 00:23:19.089 00:23:19.089 User defined options 00:23:19.089 buildtype : debug 00:23:19.089 default_library : shared 00:23:19.089 libdir : lib 00:23:19.089 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:19.089 b_sanitize : address 00:23:19.089 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:23:19.089 c_link_args : 00:23:19.089 cpu_instruction_set: native 00:23:19.089 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:23:19.089 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:23:19.089 enable_docs : false 00:23:19.089 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:23:19.089 enable_kmods : false 00:23:19.089 max_lcores : 128 00:23:19.089 tests : false 00:23:19.089 00:23:19.089 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:23:19.089 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:23:19.347 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:23:19.347 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:23:19.347 [3/268] Linking static target lib/librte_kvargs.a 00:23:19.347 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:23:19.347 [5/268] Linking static target lib/librte_log.a 00:23:19.347 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:23:19.915 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:23:19.915 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:23:19.915 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:23:20.174 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:23:20.174 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:23:20.174 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:23:20.174 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:23:20.174 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:23:20.174 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:23:20.431 [16/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:23:20.431 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:23:20.431 [18/268] Linking static target lib/librte_telemetry.a 00:23:20.431 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:23:20.431 [20/268] Linking target lib/librte_log.so.24.1 00:23:20.689 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:23:20.689 [22/268] Linking target lib/librte_kvargs.so.24.1 00:23:20.948 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:23:20.948 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:23:20.948 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:23:21.207 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:23:21.207 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:23:21.207 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:23:21.207 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:23:21.207 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:23:21.465 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:23:21.465 [32/268] Linking target lib/librte_telemetry.so.24.1 00:23:21.465 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:23:21.465 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:23:21.723 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:23:21.723 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:23:21.723 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:23:21.980 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:23:21.980 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:23:22.239 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:23:22.239 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:23:22.239 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:23:22.239 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:23:22.239 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:23:22.497 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:23:22.497 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:23:22.497 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:23:22.754 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:23:22.754 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:23:22.754 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:23:23.012 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:23:23.012 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:23:23.271 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:23:23.271 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:23:23.530 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:23:23.530 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:23:23.530 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:23:23.530 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:23:23.530 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:23:23.530 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:23:23.787 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:23:23.787 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:23:24.045 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:23:24.045 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:23:24.303 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:23:24.303 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:23:24.561 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:23:24.561 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:23:24.561 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:23:24.561 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:23:24.561 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:23:24.819 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:24.819 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:23:24.819 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:23:24.819 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:23:24.819 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:23:25.076 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:23:25.076 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:23:25.335 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:23:25.335 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:23:25.335 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:23:25.335 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:23:25.593 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:23:25.593 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:23:25.593 [85/268] Linking static target lib/librte_ring.a 00:23:25.851 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:23:25.851 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:23:25.851 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:23:25.851 [89/268] Linking static target lib/librte_rcu.a 00:23:25.851 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:23:25.851 [91/268] Linking static target lib/librte_eal.a 00:23:26.109 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:23:26.109 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:23:26.109 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:23:26.368 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:23:26.368 [96/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:23:26.368 [97/268] Linking static target lib/librte_mempool.a 00:23:26.368 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:23:26.368 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:23:26.368 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:23:26.626 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:23:26.885 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:23:26.885 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:23:26.885 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:23:26.885 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:23:26.885 [106/268] Linking static target lib/librte_mbuf.a 00:23:27.144 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:23:27.144 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:23:27.144 [109/268] Linking static target lib/librte_net.a 00:23:27.144 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:23:27.144 [111/268] Linking static target lib/librte_meter.a 00:23:27.709 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:23:27.709 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:23:27.709 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:23:27.709 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:23:27.709 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:23:27.709 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:23:27.709 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:23:27.967 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:23:28.226 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:23:28.484 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:23:28.743 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:23:28.743 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:23:28.743 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:23:28.743 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:23:28.743 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:23:29.000 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:23:29.000 [128/268] Linking static target lib/librte_pci.a 00:23:29.000 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:23:29.000 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:23:29.000 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:23:29.259 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:29.259 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:23:29.259 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:23:29.523 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:23:29.523 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:23:29.523 
[137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:23:29.523 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:23:29.523 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:23:29.523 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:23:29.523 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:23:29.810 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:23:29.810 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:23:29.810 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:23:29.810 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:23:29.810 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:23:29.810 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:23:29.810 [148/268] Linking static target lib/librte_cmdline.a 00:23:30.068 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:23:30.327 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:23:30.585 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:23:30.585 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:23:30.585 [153/268] Linking static target lib/librte_timer.a 00:23:30.585 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:23:30.844 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:23:30.844 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:23:30.844 [157/268] Linking static target lib/librte_ethdev.a 00:23:31.101 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:23:31.101 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:23:31.101 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:23:31.101 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:23:31.101 [162/268] Linking static target lib/librte_compressdev.a 00:23:31.359 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:23:31.359 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:23:31.359 [165/268] Linking static target lib/librte_hash.a 00:23:31.617 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:23:31.617 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:23:31.617 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:23:31.617 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:23:31.617 [170/268] Linking static target lib/librte_dmadev.a 00:23:31.875 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:23:31.875 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:23:32.134 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:23:32.391 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:32.391 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:23:32.391 [176/268] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:23:32.649 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:23:32.649 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:32.649 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:23:32.907 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:32.907 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:32.907 [182/268] Linking static target lib/librte_cryptodev.a 00:23:32.907 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:23:32.907 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:23:33.166 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:33.166 [186/268] Linking static target lib/librte_power.a 00:23:33.423 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:33.681 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:33.681 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:33.681 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:33.681 [191/268] Linking static target lib/librte_security.a 00:23:33.940 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:33.940 [193/268] Linking static target lib/librte_reorder.a 00:23:34.199 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:34.457 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:34.457 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:34.457 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:34.715 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:34.974 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:34.974 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:35.234 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:23:35.234 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:35.531 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:35.531 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:35.531 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:35.789 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:35.789 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:36.051 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:36.052 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:23:36.052 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:36.052 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:36.311 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:36.311 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:36.311 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:36.311 [215/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:36.311 [216/268] Linking static target drivers/librte_bus_vdev.a 00:23:36.311 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:36.311 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:36.311 [219/268] Linking static target drivers/librte_bus_pci.a 00:23:36.311 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:36.311 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:23:36.569 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:23:36.569 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:36.569 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:36.827 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:36.827 [226/268] Linking static target drivers/librte_mempool_ring.a 00:23:37.085 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:37.652 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:23:37.910 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:23:38.169 [230/268] Linking target lib/librte_eal.so.24.1 00:23:38.169 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:23:38.427 [232/268] Linking target lib/librte_ring.so.24.1 00:23:38.427 [233/268] Linking target lib/librte_meter.so.24.1 00:23:38.427 [234/268] Linking target lib/librte_pci.so.24.1 00:23:38.427 [235/268] Linking target lib/librte_timer.so.24.1 00:23:38.427 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:23:38.427 [237/268] Linking target lib/librte_dmadev.so.24.1 00:23:38.427 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:23:38.427 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:23:38.427 [240/268] Linking target lib/librte_rcu.so.24.1 00:23:38.427 [241/268] Linking target lib/librte_mempool.so.24.1 00:23:38.427 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:23:38.427 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:23:38.427 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:23:38.685 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:23:38.685 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:23:38.685 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:23:38.685 [248/268] Linking target lib/librte_mbuf.so.24.1 00:23:38.685 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:23:38.942 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:23:38.942 [251/268] Linking target lib/librte_net.so.24.1 00:23:38.942 [252/268] Linking target lib/librte_compressdev.so.24.1 00:23:38.942 [253/268] Linking target lib/librte_reorder.so.24.1 00:23:38.942 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:23:38.942 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:23:39.200 [256/268] Generating symbol 
file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:23:39.200 [257/268] Linking target lib/librte_cmdline.so.24.1 00:23:39.200 [258/268] Linking target lib/librte_hash.so.24.1 00:23:39.200 [259/268] Linking target lib/librte_security.so.24.1 00:23:39.200 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:23:39.457 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:39.457 [262/268] Linking target lib/librte_ethdev.so.24.1 00:23:39.715 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:23:39.715 [264/268] Linking target lib/librte_power.so.24.1 00:23:42.244 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:23:42.244 [266/268] Linking static target lib/librte_vhost.a 00:23:43.617 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:23:43.617 [268/268] Linking target lib/librte_vhost.so.24.1 00:23:43.617 INFO: autodetecting backend as ninja 00:23:43.617 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:24:05.548 CC lib/ut_mock/mock.o 00:24:05.548 CC lib/ut/ut.o 00:24:05.548 CC lib/log/log.o 00:24:05.548 CC lib/log/log_flags.o 00:24:05.548 CC lib/log/log_deprecated.o 00:24:05.548 LIB libspdk_log.a 00:24:05.548 LIB libspdk_ut.a 00:24:05.548 LIB libspdk_ut_mock.a 00:24:05.548 SO libspdk_ut.so.2.0 00:24:05.548 SO libspdk_log.so.7.0 00:24:05.548 SO libspdk_ut_mock.so.6.0 00:24:05.548 SYMLINK libspdk_ut.so 00:24:05.548 SYMLINK libspdk_log.so 00:24:05.548 SYMLINK libspdk_ut_mock.so 00:24:05.548 CC lib/ioat/ioat.o 00:24:05.548 CC lib/dma/dma.o 00:24:05.548 CXX lib/trace_parser/trace.o 00:24:05.548 CC lib/util/base64.o 00:24:05.548 CC lib/util/bit_array.o 00:24:05.548 CC lib/util/cpuset.o 00:24:05.548 CC lib/util/crc16.o 00:24:05.548 CC lib/util/crc32.o 00:24:05.548 CC lib/util/crc32c.o 00:24:05.548 CC lib/vfio_user/host/vfio_user_pci.o 00:24:05.548 CC lib/util/crc32_ieee.o 00:24:05.548 CC lib/util/crc64.o 00:24:05.548 CC lib/util/dif.o 00:24:05.548 CC lib/util/fd.o 00:24:05.807 LIB libspdk_dma.a 00:24:05.807 CC lib/vfio_user/host/vfio_user.o 00:24:05.807 LIB libspdk_ioat.a 00:24:05.807 SO libspdk_dma.so.5.0 00:24:05.807 CC lib/util/fd_group.o 00:24:05.807 SO libspdk_ioat.so.7.0 00:24:05.807 CC lib/util/file.o 00:24:05.807 CC lib/util/hexlify.o 00:24:05.807 SYMLINK libspdk_dma.so 00:24:05.807 CC lib/util/iov.o 00:24:05.807 SYMLINK libspdk_ioat.so 00:24:05.807 CC lib/util/math.o 00:24:05.807 CC lib/util/net.o 00:24:05.807 CC lib/util/pipe.o 00:24:06.065 LIB libspdk_vfio_user.a 00:24:06.065 CC lib/util/strerror_tls.o 00:24:06.065 SO libspdk_vfio_user.so.5.0 00:24:06.065 CC lib/util/string.o 00:24:06.065 CC lib/util/uuid.o 00:24:06.065 CC lib/util/xor.o 00:24:06.065 SYMLINK libspdk_vfio_user.so 00:24:06.065 CC lib/util/zipf.o 00:24:06.065 CC lib/util/md5.o 00:24:06.325 LIB libspdk_util.a 00:24:06.583 SO libspdk_util.so.10.0 00:24:06.841 SYMLINK libspdk_util.so 00:24:06.841 LIB libspdk_trace_parser.a 00:24:06.841 SO libspdk_trace_parser.so.6.0 00:24:06.841 SYMLINK libspdk_trace_parser.so 00:24:06.841 CC lib/idxd/idxd_user.o 00:24:06.841 CC lib/idxd/idxd.o 00:24:06.841 CC lib/conf/conf.o 00:24:06.841 CC lib/idxd/idxd_kernel.o 00:24:06.841 CC lib/json/json_parse.o 00:24:06.841 CC lib/json/json_util.o 00:24:06.841 CC lib/rdma_provider/common.o 00:24:06.841 CC lib/vmd/vmd.o 00:24:06.841 CC lib/env_dpdk/env.o 
00:24:06.841 CC lib/rdma_utils/rdma_utils.o 00:24:07.100 CC lib/env_dpdk/memory.o 00:24:07.100 CC lib/rdma_provider/rdma_provider_verbs.o 00:24:07.100 CC lib/json/json_write.o 00:24:07.100 CC lib/env_dpdk/pci.o 00:24:07.358 CC lib/env_dpdk/init.o 00:24:07.358 LIB libspdk_conf.a 00:24:07.358 SO libspdk_conf.so.6.0 00:24:07.358 LIB libspdk_rdma_utils.a 00:24:07.358 SO libspdk_rdma_utils.so.1.0 00:24:07.358 SYMLINK libspdk_conf.so 00:24:07.358 CC lib/vmd/led.o 00:24:07.358 SYMLINK libspdk_rdma_utils.so 00:24:07.358 CC lib/env_dpdk/threads.o 00:24:07.358 LIB libspdk_rdma_provider.a 00:24:07.358 SO libspdk_rdma_provider.so.6.0 00:24:07.616 SYMLINK libspdk_rdma_provider.so 00:24:07.616 CC lib/env_dpdk/pci_ioat.o 00:24:07.616 CC lib/env_dpdk/pci_virtio.o 00:24:07.616 CC lib/env_dpdk/pci_vmd.o 00:24:07.616 LIB libspdk_json.a 00:24:07.616 SO libspdk_json.so.6.0 00:24:07.616 CC lib/env_dpdk/pci_idxd.o 00:24:07.616 CC lib/env_dpdk/pci_event.o 00:24:07.616 CC lib/env_dpdk/sigbus_handler.o 00:24:07.616 SYMLINK libspdk_json.so 00:24:07.616 CC lib/env_dpdk/pci_dpdk.o 00:24:07.616 CC lib/env_dpdk/pci_dpdk_2207.o 00:24:07.874 CC lib/env_dpdk/pci_dpdk_2211.o 00:24:07.874 LIB libspdk_idxd.a 00:24:07.874 SO libspdk_idxd.so.12.1 00:24:07.874 LIB libspdk_vmd.a 00:24:07.874 CC lib/jsonrpc/jsonrpc_server.o 00:24:07.874 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:24:07.874 CC lib/jsonrpc/jsonrpc_client.o 00:24:07.874 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:24:07.874 SO libspdk_vmd.so.6.0 00:24:07.875 SYMLINK libspdk_idxd.so 00:24:07.875 SYMLINK libspdk_vmd.so 00:24:08.133 LIB libspdk_jsonrpc.a 00:24:08.391 SO libspdk_jsonrpc.so.6.0 00:24:08.391 SYMLINK libspdk_jsonrpc.so 00:24:08.649 CC lib/rpc/rpc.o 00:24:08.908 LIB libspdk_rpc.a 00:24:08.908 SO libspdk_rpc.so.6.0 00:24:08.908 LIB libspdk_env_dpdk.a 00:24:08.908 SYMLINK libspdk_rpc.so 00:24:09.165 SO libspdk_env_dpdk.so.15.0 00:24:09.165 CC lib/notify/notify.o 00:24:09.165 CC lib/notify/notify_rpc.o 00:24:09.165 CC lib/trace/trace_flags.o 00:24:09.165 CC lib/trace/trace.o 00:24:09.165 CC lib/trace/trace_rpc.o 00:24:09.165 CC lib/keyring/keyring.o 00:24:09.165 CC lib/keyring/keyring_rpc.o 00:24:09.165 SYMLINK libspdk_env_dpdk.so 00:24:09.424 LIB libspdk_notify.a 00:24:09.424 SO libspdk_notify.so.6.0 00:24:09.424 SYMLINK libspdk_notify.so 00:24:09.424 LIB libspdk_keyring.a 00:24:09.424 LIB libspdk_trace.a 00:24:09.424 SO libspdk_keyring.so.2.0 00:24:09.424 SO libspdk_trace.so.11.0 00:24:09.681 SYMLINK libspdk_keyring.so 00:24:09.681 SYMLINK libspdk_trace.so 00:24:09.939 CC lib/sock/sock_rpc.o 00:24:09.939 CC lib/thread/iobuf.o 00:24:09.939 CC lib/sock/sock.o 00:24:09.939 CC lib/thread/thread.o 00:24:10.526 LIB libspdk_sock.a 00:24:10.526 SO libspdk_sock.so.10.0 00:24:10.526 SYMLINK libspdk_sock.so 00:24:10.784 CC lib/nvme/nvme_ctrlr.o 00:24:10.784 CC lib/nvme/nvme_ctrlr_cmd.o 00:24:10.784 CC lib/nvme/nvme_fabric.o 00:24:10.784 CC lib/nvme/nvme_ns_cmd.o 00:24:10.784 CC lib/nvme/nvme_pcie_common.o 00:24:10.784 CC lib/nvme/nvme_ns.o 00:24:10.784 CC lib/nvme/nvme_pcie.o 00:24:10.784 CC lib/nvme/nvme.o 00:24:10.784 CC lib/nvme/nvme_qpair.o 00:24:11.718 CC lib/nvme/nvme_quirks.o 00:24:11.718 CC lib/nvme/nvme_transport.o 00:24:11.718 CC lib/nvme/nvme_discovery.o 00:24:11.976 LIB libspdk_thread.a 00:24:11.976 SO libspdk_thread.so.10.2 00:24:11.976 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:24:11.976 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:24:11.976 CC lib/nvme/nvme_tcp.o 00:24:11.976 SYMLINK libspdk_thread.so 00:24:11.976 CC lib/nvme/nvme_opal.o 00:24:11.976 CC lib/nvme/nvme_io_msg.o 
00:24:12.234 CC lib/accel/accel.o 00:24:12.492 CC lib/blob/blobstore.o 00:24:12.492 CC lib/accel/accel_rpc.o 00:24:12.752 CC lib/nvme/nvme_poll_group.o 00:24:12.752 CC lib/blob/request.o 00:24:12.752 CC lib/nvme/nvme_zns.o 00:24:12.752 CC lib/accel/accel_sw.o 00:24:12.752 CC lib/init/json_config.o 00:24:13.015 CC lib/nvme/nvme_stubs.o 00:24:13.273 CC lib/nvme/nvme_auth.o 00:24:13.273 CC lib/init/subsystem.o 00:24:13.273 CC lib/nvme/nvme_cuse.o 00:24:13.273 CC lib/blob/zeroes.o 00:24:13.531 CC lib/init/subsystem_rpc.o 00:24:13.531 CC lib/init/rpc.o 00:24:13.531 CC lib/blob/blob_bs_dev.o 00:24:13.531 CC lib/nvme/nvme_rdma.o 00:24:13.790 LIB libspdk_init.a 00:24:13.790 SO libspdk_init.so.6.0 00:24:13.790 LIB libspdk_accel.a 00:24:13.790 CC lib/virtio/virtio.o 00:24:13.790 CC lib/virtio/virtio_vhost_user.o 00:24:13.790 SO libspdk_accel.so.16.0 00:24:13.790 SYMLINK libspdk_init.so 00:24:14.048 SYMLINK libspdk_accel.so 00:24:14.048 CC lib/virtio/virtio_vfio_user.o 00:24:14.048 CC lib/fsdev/fsdev.o 00:24:14.048 CC lib/fsdev/fsdev_io.o 00:24:14.306 CC lib/event/app.o 00:24:14.306 CC lib/event/reactor.o 00:24:14.306 CC lib/event/log_rpc.o 00:24:14.306 CC lib/virtio/virtio_pci.o 00:24:14.306 CC lib/fsdev/fsdev_rpc.o 00:24:14.306 CC lib/event/app_rpc.o 00:24:14.563 CC lib/event/scheduler_static.o 00:24:14.563 CC lib/bdev/bdev.o 00:24:14.563 CC lib/bdev/bdev_zone.o 00:24:14.563 CC lib/bdev/bdev_rpc.o 00:24:14.821 CC lib/bdev/part.o 00:24:14.822 CC lib/bdev/scsi_nvme.o 00:24:14.822 LIB libspdk_event.a 00:24:14.822 LIB libspdk_fsdev.a 00:24:14.822 SO libspdk_event.so.15.0 00:24:14.822 LIB libspdk_virtio.a 00:24:15.136 SO libspdk_fsdev.so.1.0 00:24:15.136 SO libspdk_virtio.so.7.0 00:24:15.136 SYMLINK libspdk_event.so 00:24:15.136 SYMLINK libspdk_fsdev.so 00:24:15.136 SYMLINK libspdk_virtio.so 00:24:15.395 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:24:15.395 LIB libspdk_nvme.a 00:24:15.654 SO libspdk_nvme.so.14.0 00:24:15.911 SYMLINK libspdk_nvme.so 00:24:16.169 LIB libspdk_fuse_dispatcher.a 00:24:16.169 SO libspdk_fuse_dispatcher.so.1.0 00:24:16.169 SYMLINK libspdk_fuse_dispatcher.so 00:24:17.103 LIB libspdk_blob.a 00:24:17.362 SO libspdk_blob.so.11.0 00:24:17.362 SYMLINK libspdk_blob.so 00:24:17.620 CC lib/lvol/lvol.o 00:24:17.620 CC lib/blobfs/blobfs.o 00:24:17.620 CC lib/blobfs/tree.o 00:24:18.993 LIB libspdk_bdev.a 00:24:18.993 SO libspdk_bdev.so.17.0 00:24:18.993 SYMLINK libspdk_bdev.so 00:24:18.993 LIB libspdk_blobfs.a 00:24:18.993 LIB libspdk_lvol.a 00:24:18.993 SO libspdk_blobfs.so.10.0 00:24:18.993 SO libspdk_lvol.so.10.0 00:24:19.251 SYMLINK libspdk_blobfs.so 00:24:19.251 CC lib/nvmf/ctrlr.o 00:24:19.251 CC lib/nvmf/ctrlr_discovery.o 00:24:19.251 CC lib/nvmf/ctrlr_bdev.o 00:24:19.251 CC lib/nvmf/subsystem.o 00:24:19.251 CC lib/nvmf/nvmf.o 00:24:19.251 CC lib/nbd/nbd.o 00:24:19.251 SYMLINK libspdk_lvol.so 00:24:19.251 CC lib/nvmf/nvmf_rpc.o 00:24:19.251 CC lib/scsi/dev.o 00:24:19.251 CC lib/ftl/ftl_core.o 00:24:19.251 CC lib/ublk/ublk.o 00:24:19.816 CC lib/scsi/lun.o 00:24:19.816 CC lib/nbd/nbd_rpc.o 00:24:20.125 LIB libspdk_nbd.a 00:24:20.125 CC lib/nvmf/transport.o 00:24:20.125 SO libspdk_nbd.so.7.0 00:24:20.125 CC lib/ftl/ftl_init.o 00:24:20.125 SYMLINK libspdk_nbd.so 00:24:20.125 CC lib/ftl/ftl_layout.o 00:24:20.125 CC lib/nvmf/tcp.o 00:24:20.125 CC lib/ublk/ublk_rpc.o 00:24:20.125 CC lib/scsi/port.o 00:24:20.384 CC lib/nvmf/stubs.o 00:24:20.384 LIB libspdk_ublk.a 00:24:20.384 CC lib/scsi/scsi.o 00:24:20.384 CC lib/nvmf/mdns_server.o 00:24:20.384 CC lib/scsi/scsi_bdev.o 00:24:20.384 SO 
libspdk_ublk.so.3.0 00:24:20.384 SYMLINK libspdk_ublk.so 00:24:20.384 CC lib/ftl/ftl_debug.o 00:24:20.384 CC lib/scsi/scsi_pr.o 00:24:20.642 CC lib/ftl/ftl_io.o 00:24:20.900 CC lib/nvmf/rdma.o 00:24:20.900 CC lib/nvmf/auth.o 00:24:20.900 CC lib/ftl/ftl_sb.o 00:24:20.900 CC lib/scsi/scsi_rpc.o 00:24:20.900 CC lib/ftl/ftl_l2p.o 00:24:20.900 CC lib/ftl/ftl_l2p_flat.o 00:24:21.158 CC lib/scsi/task.o 00:24:21.158 CC lib/ftl/ftl_nv_cache.o 00:24:21.158 CC lib/ftl/ftl_band.o 00:24:21.158 CC lib/ftl/ftl_band_ops.o 00:24:21.158 CC lib/ftl/ftl_writer.o 00:24:21.158 CC lib/ftl/ftl_rq.o 00:24:21.416 LIB libspdk_scsi.a 00:24:21.416 SO libspdk_scsi.so.9.0 00:24:21.416 CC lib/ftl/ftl_reloc.o 00:24:21.675 SYMLINK libspdk_scsi.so 00:24:21.675 CC lib/ftl/ftl_l2p_cache.o 00:24:21.675 CC lib/ftl/ftl_p2l.o 00:24:21.675 CC lib/ftl/ftl_p2l_log.o 00:24:21.675 CC lib/ftl/mngt/ftl_mngt.o 00:24:21.932 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:21.932 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:21.932 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:22.246 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:22.246 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:22.246 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:22.246 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:22.246 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:22.246 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:22.504 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:22.504 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:22.504 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:22.504 CC lib/ftl/utils/ftl_conf.o 00:24:22.504 CC lib/ftl/utils/ftl_md.o 00:24:22.504 CC lib/ftl/utils/ftl_mempool.o 00:24:22.504 CC lib/ftl/utils/ftl_bitmap.o 00:24:22.762 CC lib/ftl/utils/ftl_property.o 00:24:22.762 CC lib/iscsi/conn.o 00:24:22.762 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:22.762 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:22.762 CC lib/vhost/vhost.o 00:24:22.762 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:22.762 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:23.020 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:23.020 CC lib/vhost/vhost_rpc.o 00:24:23.020 CC lib/vhost/vhost_scsi.o 00:24:23.020 CC lib/vhost/vhost_blk.o 00:24:23.020 CC lib/vhost/rte_vhost_user.o 00:24:23.020 CC lib/iscsi/init_grp.o 00:24:23.277 CC lib/iscsi/iscsi.o 00:24:23.277 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:23.535 CC lib/iscsi/param.o 00:24:23.535 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:24:23.535 CC lib/iscsi/portal_grp.o 00:24:23.535 CC lib/iscsi/tgt_node.o 00:24:23.793 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:23.793 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:23.793 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:23.793 CC lib/iscsi/iscsi_subsystem.o 00:24:24.051 CC lib/iscsi/iscsi_rpc.o 00:24:24.051 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:24.051 CC lib/iscsi/task.o 00:24:24.051 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:24:24.051 LIB libspdk_nvmf.a 00:24:24.310 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:24:24.310 CC lib/ftl/base/ftl_base_dev.o 00:24:24.310 SO libspdk_nvmf.so.19.0 00:24:24.310 CC lib/ftl/base/ftl_base_bdev.o 00:24:24.310 CC lib/ftl/ftl_trace.o 00:24:24.310 LIB libspdk_vhost.a 00:24:24.568 SO libspdk_vhost.so.8.0 00:24:24.568 SYMLINK libspdk_nvmf.so 00:24:24.568 SYMLINK libspdk_vhost.so 00:24:24.568 LIB libspdk_ftl.a 00:24:24.827 SO libspdk_ftl.so.9.0 00:24:25.086 LIB libspdk_iscsi.a 00:24:25.344 SO libspdk_iscsi.so.8.0 00:24:25.344 SYMLINK libspdk_ftl.so 00:24:25.344 SYMLINK libspdk_iscsi.so 00:24:25.911 CC module/env_dpdk/env_dpdk_rpc.o 00:24:25.911 CC module/accel/error/accel_error.o 00:24:25.911 CC module/keyring/file/keyring.o 00:24:25.911 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:24:25.911 CC module/accel/ioat/accel_ioat.o 00:24:25.911 CC module/scheduler/gscheduler/gscheduler.o 00:24:25.911 CC module/blob/bdev/blob_bdev.o 00:24:25.911 CC module/fsdev/aio/fsdev_aio.o 00:24:25.911 CC module/scheduler/dynamic/scheduler_dynamic.o 00:24:25.911 CC module/sock/posix/posix.o 00:24:25.911 LIB libspdk_env_dpdk_rpc.a 00:24:25.911 SO libspdk_env_dpdk_rpc.so.6.0 00:24:26.169 SYMLINK libspdk_env_dpdk_rpc.so 00:24:26.169 CC module/keyring/file/keyring_rpc.o 00:24:26.169 CC module/accel/error/accel_error_rpc.o 00:24:26.169 CC module/accel/ioat/accel_ioat_rpc.o 00:24:26.169 LIB libspdk_scheduler_gscheduler.a 00:24:26.169 LIB libspdk_scheduler_dpdk_governor.a 00:24:26.169 SO libspdk_scheduler_gscheduler.so.4.0 00:24:26.169 SO libspdk_scheduler_dpdk_governor.so.4.0 00:24:26.169 SYMLINK libspdk_scheduler_gscheduler.so 00:24:26.169 LIB libspdk_keyring_file.a 00:24:26.169 SYMLINK libspdk_scheduler_dpdk_governor.so 00:24:26.169 CC module/keyring/linux/keyring.o 00:24:26.169 LIB libspdk_accel_error.a 00:24:26.169 SO libspdk_keyring_file.so.2.0 00:24:26.428 LIB libspdk_scheduler_dynamic.a 00:24:26.428 SO libspdk_accel_error.so.2.0 00:24:26.428 LIB libspdk_accel_ioat.a 00:24:26.428 SO libspdk_scheduler_dynamic.so.4.0 00:24:26.428 SO libspdk_accel_ioat.so.6.0 00:24:26.428 SYMLINK libspdk_keyring_file.so 00:24:26.428 CC module/keyring/linux/keyring_rpc.o 00:24:26.428 SYMLINK libspdk_scheduler_dynamic.so 00:24:26.428 SYMLINK libspdk_accel_error.so 00:24:26.428 CC module/fsdev/aio/fsdev_aio_rpc.o 00:24:26.428 CC module/fsdev/aio/linux_aio_mgr.o 00:24:26.428 CC module/accel/iaa/accel_iaa.o 00:24:26.428 LIB libspdk_blob_bdev.a 00:24:26.428 CC module/accel/iaa/accel_iaa_rpc.o 00:24:26.428 CC module/accel/dsa/accel_dsa.o 00:24:26.428 SYMLINK libspdk_accel_ioat.so 00:24:26.428 CC module/accel/dsa/accel_dsa_rpc.o 00:24:26.428 SO libspdk_blob_bdev.so.11.0 00:24:26.686 LIB libspdk_keyring_linux.a 00:24:26.686 SYMLINK libspdk_blob_bdev.so 00:24:26.686 SO libspdk_keyring_linux.so.1.0 00:24:26.686 SYMLINK libspdk_keyring_linux.so 00:24:26.686 LIB libspdk_accel_iaa.a 00:24:26.686 SO libspdk_accel_iaa.so.3.0 00:24:26.964 SYMLINK libspdk_accel_iaa.so 00:24:26.964 CC module/bdev/gpt/gpt.o 00:24:26.964 CC module/bdev/error/vbdev_error.o 00:24:26.964 CC module/blobfs/bdev/blobfs_bdev.o 00:24:26.964 CC module/bdev/malloc/bdev_malloc.o 00:24:26.964 CC module/bdev/lvol/vbdev_lvol.o 00:24:26.964 CC module/bdev/delay/vbdev_delay.o 00:24:26.964 LIB libspdk_accel_dsa.a 00:24:26.964 SO libspdk_accel_dsa.so.5.0 00:24:26.964 LIB libspdk_fsdev_aio.a 00:24:26.964 SO libspdk_fsdev_aio.so.1.0 00:24:26.964 SYMLINK libspdk_accel_dsa.so 00:24:26.964 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:24:26.964 LIB libspdk_sock_posix.a 00:24:26.964 CC module/bdev/null/bdev_null.o 00:24:26.964 SO libspdk_sock_posix.so.6.0 00:24:27.223 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:24:27.223 SYMLINK libspdk_fsdev_aio.so 00:24:27.223 CC module/bdev/null/bdev_null_rpc.o 00:24:27.223 CC module/bdev/gpt/vbdev_gpt.o 00:24:27.223 SYMLINK libspdk_sock_posix.so 00:24:27.223 CC module/bdev/delay/vbdev_delay_rpc.o 00:24:27.223 CC module/bdev/error/vbdev_error_rpc.o 00:24:27.223 LIB libspdk_blobfs_bdev.a 00:24:27.223 CC module/bdev/malloc/bdev_malloc_rpc.o 00:24:27.223 SO libspdk_blobfs_bdev.so.6.0 00:24:27.481 SYMLINK libspdk_blobfs_bdev.so 00:24:27.481 LIB libspdk_bdev_error.a 00:24:27.481 LIB libspdk_bdev_delay.a 00:24:27.481 SO libspdk_bdev_error.so.6.0 00:24:27.481 SO libspdk_bdev_delay.so.6.0 
00:24:27.481 LIB libspdk_bdev_gpt.a 00:24:27.481 LIB libspdk_bdev_malloc.a 00:24:27.481 SYMLINK libspdk_bdev_error.so 00:24:27.481 CC module/bdev/nvme/bdev_nvme.o 00:24:27.481 SYMLINK libspdk_bdev_delay.so 00:24:27.481 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:27.481 SO libspdk_bdev_malloc.so.6.0 00:24:27.481 SO libspdk_bdev_gpt.so.6.0 00:24:27.481 LIB libspdk_bdev_lvol.a 00:24:27.740 LIB libspdk_bdev_null.a 00:24:27.740 CC module/bdev/raid/bdev_raid.o 00:24:27.740 SO libspdk_bdev_lvol.so.6.0 00:24:27.740 CC module/bdev/passthru/vbdev_passthru.o 00:24:27.740 SYMLINK libspdk_bdev_gpt.so 00:24:27.740 SYMLINK libspdk_bdev_malloc.so 00:24:27.740 CC module/bdev/raid/bdev_raid_rpc.o 00:24:27.740 SO libspdk_bdev_null.so.6.0 00:24:27.740 SYMLINK libspdk_bdev_lvol.so 00:24:27.740 CC module/bdev/split/vbdev_split.o 00:24:27.740 SYMLINK libspdk_bdev_null.so 00:24:27.740 CC module/bdev/raid/bdev_raid_sb.o 00:24:27.740 CC module/bdev/zone_block/vbdev_zone_block.o 00:24:27.998 CC module/bdev/xnvme/bdev_xnvme.o 00:24:27.998 CC module/bdev/split/vbdev_split_rpc.o 00:24:27.998 CC module/bdev/aio/bdev_aio.o 00:24:27.998 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:24:28.256 CC module/bdev/nvme/nvme_rpc.o 00:24:28.256 CC module/bdev/nvme/bdev_mdns_client.o 00:24:28.256 LIB libspdk_bdev_split.a 00:24:28.256 SO libspdk_bdev_split.so.6.0 00:24:28.256 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:28.256 LIB libspdk_bdev_passthru.a 00:24:28.256 SO libspdk_bdev_passthru.so.6.0 00:24:28.256 SYMLINK libspdk_bdev_split.so 00:24:28.256 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:24:28.256 CC module/bdev/raid/raid0.o 00:24:28.514 SYMLINK libspdk_bdev_passthru.so 00:24:28.514 CC module/bdev/raid/raid1.o 00:24:28.514 CC module/bdev/nvme/vbdev_opal.o 00:24:28.514 LIB libspdk_bdev_zone_block.a 00:24:28.514 SO libspdk_bdev_zone_block.so.6.0 00:24:28.514 CC module/bdev/aio/bdev_aio_rpc.o 00:24:28.514 CC module/bdev/ftl/bdev_ftl.o 00:24:28.514 SYMLINK libspdk_bdev_zone_block.so 00:24:28.514 CC module/bdev/raid/concat.o 00:24:28.514 LIB libspdk_bdev_xnvme.a 00:24:28.515 CC module/bdev/iscsi/bdev_iscsi.o 00:24:28.773 SO libspdk_bdev_xnvme.so.3.0 00:24:28.773 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:28.773 LIB libspdk_bdev_aio.a 00:24:28.773 SO libspdk_bdev_aio.so.6.0 00:24:28.773 SYMLINK libspdk_bdev_xnvme.so 00:24:28.773 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:24:28.773 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:28.773 SYMLINK libspdk_bdev_aio.so 00:24:28.773 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:29.034 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:29.034 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:29.034 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:29.034 LIB libspdk_bdev_raid.a 00:24:29.034 LIB libspdk_bdev_ftl.a 00:24:29.034 SO libspdk_bdev_raid.so.6.0 00:24:29.034 LIB libspdk_bdev_iscsi.a 00:24:29.034 SO libspdk_bdev_ftl.so.6.0 00:24:29.333 SO libspdk_bdev_iscsi.so.6.0 00:24:29.333 SYMLINK libspdk_bdev_ftl.so 00:24:29.333 SYMLINK libspdk_bdev_raid.so 00:24:29.333 SYMLINK libspdk_bdev_iscsi.so 00:24:29.592 LIB libspdk_bdev_virtio.a 00:24:29.592 SO libspdk_bdev_virtio.so.6.0 00:24:29.850 SYMLINK libspdk_bdev_virtio.so 00:24:31.226 LIB libspdk_bdev_nvme.a 00:24:31.226 SO libspdk_bdev_nvme.so.7.0 00:24:31.226 SYMLINK libspdk_bdev_nvme.so 00:24:31.793 CC module/event/subsystems/scheduler/scheduler.o 00:24:31.793 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:31.793 CC module/event/subsystems/vmd/vmd.o 00:24:31.793 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:31.793 CC 
module/event/subsystems/sock/sock.o 00:24:31.793 CC module/event/subsystems/keyring/keyring.o 00:24:31.793 CC module/event/subsystems/fsdev/fsdev.o 00:24:31.793 CC module/event/subsystems/iobuf/iobuf.o 00:24:31.793 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:32.051 LIB libspdk_event_vhost_blk.a 00:24:32.051 LIB libspdk_event_sock.a 00:24:32.051 LIB libspdk_event_vmd.a 00:24:32.051 LIB libspdk_event_fsdev.a 00:24:32.051 LIB libspdk_event_keyring.a 00:24:32.051 LIB libspdk_event_scheduler.a 00:24:32.051 SO libspdk_event_vhost_blk.so.3.0 00:24:32.051 LIB libspdk_event_iobuf.a 00:24:32.051 SO libspdk_event_sock.so.5.0 00:24:32.051 SO libspdk_event_fsdev.so.1.0 00:24:32.051 SO libspdk_event_vmd.so.6.0 00:24:32.051 SO libspdk_event_keyring.so.1.0 00:24:32.051 SO libspdk_event_scheduler.so.4.0 00:24:32.051 SO libspdk_event_iobuf.so.3.0 00:24:32.051 SYMLINK libspdk_event_vhost_blk.so 00:24:32.051 SYMLINK libspdk_event_sock.so 00:24:32.051 SYMLINK libspdk_event_fsdev.so 00:24:32.051 SYMLINK libspdk_event_keyring.so 00:24:32.051 SYMLINK libspdk_event_vmd.so 00:24:32.051 SYMLINK libspdk_event_scheduler.so 00:24:32.051 SYMLINK libspdk_event_iobuf.so 00:24:32.309 CC module/event/subsystems/accel/accel.o 00:24:32.569 LIB libspdk_event_accel.a 00:24:32.569 SO libspdk_event_accel.so.6.0 00:24:32.569 SYMLINK libspdk_event_accel.so 00:24:33.142 CC module/event/subsystems/bdev/bdev.o 00:24:33.142 LIB libspdk_event_bdev.a 00:24:33.142 SO libspdk_event_bdev.so.6.0 00:24:33.414 SYMLINK libspdk_event_bdev.so 00:24:33.414 CC module/event/subsystems/ublk/ublk.o 00:24:33.414 CC module/event/subsystems/scsi/scsi.o 00:24:33.414 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:33.414 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:33.414 CC module/event/subsystems/nbd/nbd.o 00:24:33.673 LIB libspdk_event_nbd.a 00:24:33.673 LIB libspdk_event_ublk.a 00:24:33.673 LIB libspdk_event_scsi.a 00:24:33.674 SO libspdk_event_nbd.so.6.0 00:24:33.674 SO libspdk_event_ublk.so.3.0 00:24:33.674 SO libspdk_event_scsi.so.6.0 00:24:33.933 SYMLINK libspdk_event_nbd.so 00:24:33.933 SYMLINK libspdk_event_ublk.so 00:24:33.933 LIB libspdk_event_nvmf.a 00:24:33.933 SYMLINK libspdk_event_scsi.so 00:24:33.933 SO libspdk_event_nvmf.so.6.0 00:24:33.933 SYMLINK libspdk_event_nvmf.so 00:24:34.193 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:24:34.193 CC module/event/subsystems/iscsi/iscsi.o 00:24:34.193 LIB libspdk_event_vhost_scsi.a 00:24:34.451 SO libspdk_event_vhost_scsi.so.3.0 00:24:34.451 LIB libspdk_event_iscsi.a 00:24:34.451 SO libspdk_event_iscsi.so.6.0 00:24:34.451 SYMLINK libspdk_event_vhost_scsi.so 00:24:34.451 SYMLINK libspdk_event_iscsi.so 00:24:34.709 SO libspdk.so.6.0 00:24:34.709 SYMLINK libspdk.so 00:24:34.968 CC app/spdk_lspci/spdk_lspci.o 00:24:34.968 CC app/spdk_nvme_identify/identify.o 00:24:34.968 CC app/spdk_nvme_perf/perf.o 00:24:34.968 CXX app/trace/trace.o 00:24:34.968 CC app/trace_record/trace_record.o 00:24:34.968 CC app/nvmf_tgt/nvmf_main.o 00:24:34.968 CC app/iscsi_tgt/iscsi_tgt.o 00:24:34.968 CC app/spdk_tgt/spdk_tgt.o 00:24:34.968 CC test/thread/poller_perf/poller_perf.o 00:24:34.968 CC examples/util/zipf/zipf.o 00:24:34.968 LINK spdk_lspci 00:24:35.226 LINK nvmf_tgt 00:24:35.226 LINK spdk_tgt 00:24:35.226 LINK iscsi_tgt 00:24:35.226 LINK poller_perf 00:24:35.226 LINK zipf 00:24:35.484 LINK spdk_trace_record 00:24:35.484 CC app/spdk_nvme_discover/discovery_aer.o 00:24:35.484 CC app/spdk_top/spdk_top.o 00:24:35.484 CC app/spdk_dd/spdk_dd.o 00:24:35.742 LINK spdk_trace 00:24:35.742 LINK 
spdk_nvme_discover 00:24:35.742 CC test/dma/test_dma/test_dma.o 00:24:35.742 CC examples/ioat/perf/perf.o 00:24:35.742 CC app/fio/nvme/fio_plugin.o 00:24:35.742 CC examples/vmd/lsvmd/lsvmd.o 00:24:35.999 LINK lsvmd 00:24:35.999 LINK ioat_perf 00:24:35.999 CC examples/ioat/verify/verify.o 00:24:35.999 LINK spdk_dd 00:24:35.999 CC examples/idxd/perf/perf.o 00:24:36.271 LINK spdk_nvme_identify 00:24:36.271 LINK spdk_nvme_perf 00:24:36.271 CC examples/vmd/led/led.o 00:24:36.271 LINK verify 00:24:36.529 CC test/app/bdev_svc/bdev_svc.o 00:24:36.529 LINK led 00:24:36.529 LINK test_dma 00:24:36.529 TEST_HEADER include/spdk/accel.h 00:24:36.529 TEST_HEADER include/spdk/accel_module.h 00:24:36.529 TEST_HEADER include/spdk/assert.h 00:24:36.529 TEST_HEADER include/spdk/barrier.h 00:24:36.529 TEST_HEADER include/spdk/base64.h 00:24:36.529 TEST_HEADER include/spdk/bdev.h 00:24:36.529 TEST_HEADER include/spdk/bdev_module.h 00:24:36.529 TEST_HEADER include/spdk/bdev_zone.h 00:24:36.529 TEST_HEADER include/spdk/bit_array.h 00:24:36.529 TEST_HEADER include/spdk/bit_pool.h 00:24:36.529 TEST_HEADER include/spdk/blob_bdev.h 00:24:36.529 TEST_HEADER include/spdk/blobfs_bdev.h 00:24:36.529 TEST_HEADER include/spdk/blobfs.h 00:24:36.529 TEST_HEADER include/spdk/blob.h 00:24:36.529 TEST_HEADER include/spdk/conf.h 00:24:36.529 TEST_HEADER include/spdk/config.h 00:24:36.529 TEST_HEADER include/spdk/cpuset.h 00:24:36.529 TEST_HEADER include/spdk/crc16.h 00:24:36.529 TEST_HEADER include/spdk/crc32.h 00:24:36.529 TEST_HEADER include/spdk/crc64.h 00:24:36.529 TEST_HEADER include/spdk/dif.h 00:24:36.529 TEST_HEADER include/spdk/dma.h 00:24:36.529 TEST_HEADER include/spdk/endian.h 00:24:36.529 TEST_HEADER include/spdk/env_dpdk.h 00:24:36.529 TEST_HEADER include/spdk/env.h 00:24:36.529 TEST_HEADER include/spdk/event.h 00:24:36.529 TEST_HEADER include/spdk/fd_group.h 00:24:36.529 TEST_HEADER include/spdk/fd.h 00:24:36.529 TEST_HEADER include/spdk/file.h 00:24:36.529 TEST_HEADER include/spdk/fsdev.h 00:24:36.529 TEST_HEADER include/spdk/fsdev_module.h 00:24:36.529 TEST_HEADER include/spdk/ftl.h 00:24:36.529 TEST_HEADER include/spdk/fuse_dispatcher.h 00:24:36.529 TEST_HEADER include/spdk/gpt_spec.h 00:24:36.529 TEST_HEADER include/spdk/hexlify.h 00:24:36.529 TEST_HEADER include/spdk/histogram_data.h 00:24:36.529 TEST_HEADER include/spdk/idxd.h 00:24:36.529 TEST_HEADER include/spdk/idxd_spec.h 00:24:36.529 LINK idxd_perf 00:24:36.529 TEST_HEADER include/spdk/init.h 00:24:36.529 TEST_HEADER include/spdk/ioat.h 00:24:36.529 TEST_HEADER include/spdk/ioat_spec.h 00:24:36.529 TEST_HEADER include/spdk/iscsi_spec.h 00:24:36.529 TEST_HEADER include/spdk/json.h 00:24:36.529 TEST_HEADER include/spdk/jsonrpc.h 00:24:36.529 TEST_HEADER include/spdk/keyring.h 00:24:36.529 TEST_HEADER include/spdk/keyring_module.h 00:24:36.529 TEST_HEADER include/spdk/likely.h 00:24:36.529 LINK spdk_nvme 00:24:36.529 TEST_HEADER include/spdk/log.h 00:24:36.529 TEST_HEADER include/spdk/lvol.h 00:24:36.529 TEST_HEADER include/spdk/md5.h 00:24:36.529 TEST_HEADER include/spdk/memory.h 00:24:36.529 TEST_HEADER include/spdk/mmio.h 00:24:36.529 TEST_HEADER include/spdk/nbd.h 00:24:36.529 TEST_HEADER include/spdk/net.h 00:24:36.529 TEST_HEADER include/spdk/notify.h 00:24:36.529 TEST_HEADER include/spdk/nvme.h 00:24:36.529 TEST_HEADER include/spdk/nvme_intel.h 00:24:36.529 TEST_HEADER include/spdk/nvme_ocssd.h 00:24:36.529 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:24:36.529 TEST_HEADER include/spdk/nvme_spec.h 00:24:36.529 TEST_HEADER include/spdk/nvme_zns.h 
00:24:36.529 TEST_HEADER include/spdk/nvmf_cmd.h 00:24:36.529 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:24:36.529 CC app/vhost/vhost.o 00:24:36.529 CC app/fio/bdev/fio_plugin.o 00:24:36.529 TEST_HEADER include/spdk/nvmf.h 00:24:36.529 TEST_HEADER include/spdk/nvmf_spec.h 00:24:36.529 TEST_HEADER include/spdk/nvmf_transport.h 00:24:36.529 TEST_HEADER include/spdk/opal.h 00:24:36.787 TEST_HEADER include/spdk/opal_spec.h 00:24:36.788 TEST_HEADER include/spdk/pci_ids.h 00:24:36.788 TEST_HEADER include/spdk/pipe.h 00:24:36.788 TEST_HEADER include/spdk/queue.h 00:24:36.788 TEST_HEADER include/spdk/reduce.h 00:24:36.788 TEST_HEADER include/spdk/rpc.h 00:24:36.788 TEST_HEADER include/spdk/scheduler.h 00:24:36.788 TEST_HEADER include/spdk/scsi.h 00:24:36.788 TEST_HEADER include/spdk/scsi_spec.h 00:24:36.788 TEST_HEADER include/spdk/sock.h 00:24:36.788 TEST_HEADER include/spdk/stdinc.h 00:24:36.788 TEST_HEADER include/spdk/string.h 00:24:36.788 TEST_HEADER include/spdk/thread.h 00:24:36.788 TEST_HEADER include/spdk/trace.h 00:24:36.788 TEST_HEADER include/spdk/trace_parser.h 00:24:36.788 TEST_HEADER include/spdk/tree.h 00:24:36.788 CC test/env/mem_callbacks/mem_callbacks.o 00:24:36.788 TEST_HEADER include/spdk/ublk.h 00:24:36.788 TEST_HEADER include/spdk/util.h 00:24:36.788 TEST_HEADER include/spdk/uuid.h 00:24:36.788 TEST_HEADER include/spdk/version.h 00:24:36.788 TEST_HEADER include/spdk/vfio_user_pci.h 00:24:36.788 TEST_HEADER include/spdk/vfio_user_spec.h 00:24:36.788 TEST_HEADER include/spdk/vhost.h 00:24:36.788 TEST_HEADER include/spdk/vmd.h 00:24:36.788 TEST_HEADER include/spdk/xor.h 00:24:36.788 TEST_HEADER include/spdk/zipf.h 00:24:36.788 CXX test/cpp_headers/accel.o 00:24:36.788 LINK bdev_svc 00:24:36.788 CC examples/interrupt_tgt/interrupt_tgt.o 00:24:36.788 CC test/rpc_client/rpc_client_test.o 00:24:36.788 LINK spdk_top 00:24:36.788 CC test/event/event_perf/event_perf.o 00:24:37.045 LINK vhost 00:24:37.045 CXX test/cpp_headers/accel_module.o 00:24:37.045 CC examples/thread/thread/thread_ex.o 00:24:37.045 LINK event_perf 00:24:37.045 LINK interrupt_tgt 00:24:37.045 LINK rpc_client_test 00:24:37.045 CXX test/cpp_headers/assert.o 00:24:37.045 CXX test/cpp_headers/barrier.o 00:24:37.302 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:24:37.303 LINK spdk_bdev 00:24:37.303 LINK thread 00:24:37.303 CC test/accel/dif/dif.o 00:24:37.303 CC test/event/reactor/reactor.o 00:24:37.303 CXX test/cpp_headers/base64.o 00:24:37.303 CC test/event/reactor_perf/reactor_perf.o 00:24:37.303 CC test/event/app_repeat/app_repeat.o 00:24:37.303 LINK mem_callbacks 00:24:37.561 LINK reactor 00:24:37.561 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:24:37.561 LINK reactor_perf 00:24:37.561 CXX test/cpp_headers/bdev.o 00:24:37.561 CC examples/sock/hello_world/hello_sock.o 00:24:37.561 LINK app_repeat 00:24:37.561 CC test/env/vtophys/vtophys.o 00:24:37.561 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:24:37.819 CC test/env/memory/memory_ut.o 00:24:37.819 CXX test/cpp_headers/bdev_module.o 00:24:37.819 LINK vtophys 00:24:37.819 LINK nvme_fuzz 00:24:37.819 LINK env_dpdk_post_init 00:24:37.819 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:24:37.819 LINK hello_sock 00:24:38.078 CC test/event/scheduler/scheduler.o 00:24:38.078 CXX test/cpp_headers/bdev_zone.o 00:24:38.078 CC test/env/pci/pci_ut.o 00:24:38.078 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:24:38.078 CC examples/accel/perf/accel_perf.o 00:24:38.335 CXX test/cpp_headers/bit_array.o 00:24:38.335 LINK scheduler 00:24:38.335 LINK dif 00:24:38.335 CC 
examples/blob/hello_world/hello_blob.o 00:24:38.335 CC examples/fsdev/hello_world/hello_fsdev.o 00:24:38.335 CXX test/cpp_headers/bit_pool.o 00:24:38.593 LINK hello_blob 00:24:38.593 LINK pci_ut 00:24:38.593 LINK vhost_fuzz 00:24:38.593 CC test/blobfs/mkfs/mkfs.o 00:24:38.593 LINK hello_fsdev 00:24:38.593 CXX test/cpp_headers/blob_bdev.o 00:24:38.851 CC test/lvol/esnap/esnap.o 00:24:38.851 CXX test/cpp_headers/blobfs_bdev.o 00:24:38.851 LINK mkfs 00:24:39.118 CC examples/blob/cli/blobcli.o 00:24:39.119 CC test/app/histogram_perf/histogram_perf.o 00:24:39.119 CXX test/cpp_headers/blobfs.o 00:24:39.119 CC test/app/jsoncat/jsoncat.o 00:24:39.119 LINK accel_perf 00:24:39.119 LINK memory_ut 00:24:39.119 CC examples/nvme/hello_world/hello_world.o 00:24:39.377 LINK histogram_perf 00:24:39.377 CXX test/cpp_headers/blob.o 00:24:39.377 LINK jsoncat 00:24:39.377 CC test/nvme/aer/aer.o 00:24:39.377 CXX test/cpp_headers/conf.o 00:24:39.635 LINK hello_world 00:24:39.635 CC test/nvme/reset/reset.o 00:24:39.635 CC test/app/stub/stub.o 00:24:39.635 CC test/nvme/sgl/sgl.o 00:24:39.635 CC test/nvme/e2edp/nvme_dp.o 00:24:39.635 CXX test/cpp_headers/config.o 00:24:39.635 LINK blobcli 00:24:39.635 CXX test/cpp_headers/cpuset.o 00:24:39.635 LINK stub 00:24:39.893 CC examples/nvme/reconnect/reconnect.o 00:24:39.893 LINK aer 00:24:39.893 LINK reset 00:24:39.893 CXX test/cpp_headers/crc16.o 00:24:39.893 LINK sgl 00:24:39.893 CXX test/cpp_headers/crc32.o 00:24:39.893 CXX test/cpp_headers/crc64.o 00:24:39.893 CXX test/cpp_headers/dif.o 00:24:40.151 LINK iscsi_fuzz 00:24:40.151 CXX test/cpp_headers/dma.o 00:24:40.151 CXX test/cpp_headers/endian.o 00:24:40.151 CXX test/cpp_headers/env_dpdk.o 00:24:40.151 CXX test/cpp_headers/env.o 00:24:40.151 CC test/nvme/overhead/overhead.o 00:24:40.151 LINK reconnect 00:24:40.151 CC examples/nvme/nvme_manage/nvme_manage.o 00:24:40.151 LINK nvme_dp 00:24:40.408 CXX test/cpp_headers/event.o 00:24:40.409 CXX test/cpp_headers/fd_group.o 00:24:40.409 CXX test/cpp_headers/fd.o 00:24:40.409 CXX test/cpp_headers/file.o 00:24:40.409 CC examples/nvme/arbitration/arbitration.o 00:24:40.667 CC examples/nvme/hotplug/hotplug.o 00:24:40.667 CC examples/bdev/hello_world/hello_bdev.o 00:24:40.667 LINK overhead 00:24:40.667 CC test/nvme/err_injection/err_injection.o 00:24:40.667 CC examples/bdev/bdevperf/bdevperf.o 00:24:40.667 CXX test/cpp_headers/fsdev.o 00:24:40.667 CC test/bdev/bdevio/bdevio.o 00:24:40.925 LINK hello_bdev 00:24:40.925 LINK err_injection 00:24:40.925 CC test/nvme/startup/startup.o 00:24:40.925 CXX test/cpp_headers/fsdev_module.o 00:24:40.925 LINK hotplug 00:24:40.925 LINK arbitration 00:24:40.925 LINK nvme_manage 00:24:41.182 CXX test/cpp_headers/ftl.o 00:24:41.182 LINK startup 00:24:41.182 CC test/nvme/reserve/reserve.o 00:24:41.182 CC examples/nvme/cmb_copy/cmb_copy.o 00:24:41.182 CC examples/nvme/abort/abort.o 00:24:41.182 CC test/nvme/simple_copy/simple_copy.o 00:24:41.182 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:24:41.182 LINK bdevio 00:24:41.440 CXX test/cpp_headers/fuse_dispatcher.o 00:24:41.440 LINK cmb_copy 00:24:41.440 CC test/nvme/connect_stress/connect_stress.o 00:24:41.440 LINK reserve 00:24:41.440 LINK pmr_persistence 00:24:41.440 LINK simple_copy 00:24:41.440 CXX test/cpp_headers/gpt_spec.o 00:24:41.698 CXX test/cpp_headers/hexlify.o 00:24:41.698 LINK connect_stress 00:24:41.698 LINK abort 00:24:41.698 CC test/nvme/boot_partition/boot_partition.o 00:24:41.698 LINK bdevperf 00:24:41.698 CC test/nvme/compliance/nvme_compliance.o 00:24:41.698 CXX 
test/cpp_headers/histogram_data.o 00:24:41.698 CC test/nvme/fused_ordering/fused_ordering.o 00:24:41.956 CC test/nvme/doorbell_aers/doorbell_aers.o 00:24:41.956 CC test/nvme/fdp/fdp.o 00:24:41.956 LINK boot_partition 00:24:41.956 CC test/nvme/cuse/cuse.o 00:24:41.956 CXX test/cpp_headers/idxd.o 00:24:41.956 CXX test/cpp_headers/idxd_spec.o 00:24:41.956 LINK doorbell_aers 00:24:42.214 CXX test/cpp_headers/init.o 00:24:42.214 LINK fused_ordering 00:24:42.214 CXX test/cpp_headers/ioat.o 00:24:42.214 CXX test/cpp_headers/ioat_spec.o 00:24:42.214 LINK nvme_compliance 00:24:42.214 CXX test/cpp_headers/iscsi_spec.o 00:24:42.214 CC examples/nvmf/nvmf/nvmf.o 00:24:42.214 LINK fdp 00:24:42.214 CXX test/cpp_headers/json.o 00:24:42.472 CXX test/cpp_headers/jsonrpc.o 00:24:42.472 CXX test/cpp_headers/keyring.o 00:24:42.472 CXX test/cpp_headers/keyring_module.o 00:24:42.472 CXX test/cpp_headers/likely.o 00:24:42.472 CXX test/cpp_headers/log.o 00:24:42.472 CXX test/cpp_headers/lvol.o 00:24:42.472 CXX test/cpp_headers/md5.o 00:24:42.730 CXX test/cpp_headers/memory.o 00:24:42.730 CXX test/cpp_headers/mmio.o 00:24:42.730 CXX test/cpp_headers/nbd.o 00:24:42.730 CXX test/cpp_headers/net.o 00:24:42.730 CXX test/cpp_headers/notify.o 00:24:42.730 CXX test/cpp_headers/nvme.o 00:24:42.730 LINK nvmf 00:24:42.730 CXX test/cpp_headers/nvme_intel.o 00:24:42.730 CXX test/cpp_headers/nvme_ocssd.o 00:24:42.730 CXX test/cpp_headers/nvme_ocssd_spec.o 00:24:42.988 CXX test/cpp_headers/nvme_spec.o 00:24:42.988 CXX test/cpp_headers/nvme_zns.o 00:24:42.988 CXX test/cpp_headers/nvmf_cmd.o 00:24:42.988 CXX test/cpp_headers/nvmf_fc_spec.o 00:24:42.988 CXX test/cpp_headers/nvmf.o 00:24:42.988 CXX test/cpp_headers/nvmf_spec.o 00:24:42.988 CXX test/cpp_headers/nvmf_transport.o 00:24:42.988 CXX test/cpp_headers/opal.o 00:24:42.988 CXX test/cpp_headers/opal_spec.o 00:24:42.988 CXX test/cpp_headers/pci_ids.o 00:24:43.246 CXX test/cpp_headers/pipe.o 00:24:43.246 CXX test/cpp_headers/queue.o 00:24:43.246 CXX test/cpp_headers/reduce.o 00:24:43.246 CXX test/cpp_headers/rpc.o 00:24:43.246 CXX test/cpp_headers/scheduler.o 00:24:43.246 CXX test/cpp_headers/scsi.o 00:24:43.246 CXX test/cpp_headers/scsi_spec.o 00:24:43.246 CXX test/cpp_headers/sock.o 00:24:43.246 CXX test/cpp_headers/stdinc.o 00:24:43.246 CXX test/cpp_headers/string.o 00:24:43.246 CXX test/cpp_headers/thread.o 00:24:43.246 CXX test/cpp_headers/trace.o 00:24:43.246 CXX test/cpp_headers/trace_parser.o 00:24:43.504 CXX test/cpp_headers/tree.o 00:24:43.504 CXX test/cpp_headers/ublk.o 00:24:43.504 CXX test/cpp_headers/util.o 00:24:43.504 CXX test/cpp_headers/uuid.o 00:24:43.504 CXX test/cpp_headers/version.o 00:24:43.504 CXX test/cpp_headers/vfio_user_pci.o 00:24:43.504 CXX test/cpp_headers/vfio_user_spec.o 00:24:43.504 CXX test/cpp_headers/vhost.o 00:24:43.504 CXX test/cpp_headers/vmd.o 00:24:43.504 CXX test/cpp_headers/xor.o 00:24:43.762 CXX test/cpp_headers/zipf.o 00:24:43.762 LINK cuse 00:24:46.291 LINK esnap 00:24:46.858 00:24:46.858 real 1m42.714s 00:24:46.858 user 9m39.275s 00:24:46.858 sys 1m50.576s 00:24:46.858 01:53:55 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:24:46.858 ************************************ 00:24:46.858 END TEST make 00:24:46.858 ************************************ 00:24:46.858 01:53:55 make -- common/autotest_common.sh@10 -- $ set +x 00:24:46.858 01:53:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:24:46.858 01:53:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:46.858 01:53:55 -- pm/common@40 -- $ local monitor 
pid pids signal=TERM 00:24:46.858 01:53:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:46.858 01:53:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:46.858 01:53:55 -- pm/common@44 -- $ pid=5325 00:24:46.858 01:53:55 -- pm/common@50 -- $ kill -TERM 5325 00:24:46.858 01:53:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:46.858 01:53:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:46.858 01:53:55 -- pm/common@44 -- $ pid=5326 00:24:46.858 01:53:55 -- pm/common@50 -- $ kill -TERM 5326 00:24:47.117 01:53:55 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:47.117 01:53:55 -- common/autotest_common.sh@1681 -- # lcov --version 00:24:47.117 01:53:55 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:47.117 01:53:56 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:47.117 01:53:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.117 01:53:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.117 01:53:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.117 01:53:56 -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.117 01:53:56 -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.117 01:53:56 -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.117 01:53:56 -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.117 01:53:56 -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.117 01:53:56 -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.117 01:53:56 -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.117 01:53:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.117 01:53:56 -- scripts/common.sh@344 -- # case "$op" in 00:24:47.117 01:53:56 -- scripts/common.sh@345 -- # : 1 00:24:47.117 01:53:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.117 01:53:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.117 01:53:56 -- scripts/common.sh@365 -- # decimal 1 00:24:47.117 01:53:56 -- scripts/common.sh@353 -- # local d=1 00:24:47.117 01:53:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.117 01:53:56 -- scripts/common.sh@355 -- # echo 1 00:24:47.117 01:53:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.117 01:53:56 -- scripts/common.sh@366 -- # decimal 2 00:24:47.117 01:53:56 -- scripts/common.sh@353 -- # local d=2 00:24:47.117 01:53:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.117 01:53:56 -- scripts/common.sh@355 -- # echo 2 00:24:47.117 01:53:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.117 01:53:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.117 01:53:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.117 01:53:56 -- scripts/common.sh@368 -- # return 0 00:24:47.117 01:53:56 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.117 01:53:56 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:47.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.117 --rc genhtml_branch_coverage=1 00:24:47.117 --rc genhtml_function_coverage=1 00:24:47.117 --rc genhtml_legend=1 00:24:47.117 --rc geninfo_all_blocks=1 00:24:47.117 --rc geninfo_unexecuted_blocks=1 00:24:47.117 00:24:47.117 ' 00:24:47.117 01:53:56 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:47.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.117 --rc genhtml_branch_coverage=1 00:24:47.117 --rc genhtml_function_coverage=1 00:24:47.117 --rc genhtml_legend=1 00:24:47.117 --rc geninfo_all_blocks=1 00:24:47.117 --rc geninfo_unexecuted_blocks=1 00:24:47.117 00:24:47.117 ' 00:24:47.117 01:53:56 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:47.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.117 --rc genhtml_branch_coverage=1 00:24:47.117 --rc genhtml_function_coverage=1 00:24:47.117 --rc genhtml_legend=1 00:24:47.117 --rc geninfo_all_blocks=1 00:24:47.117 --rc geninfo_unexecuted_blocks=1 00:24:47.117 00:24:47.117 ' 00:24:47.117 01:53:56 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:47.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.117 --rc genhtml_branch_coverage=1 00:24:47.117 --rc genhtml_function_coverage=1 00:24:47.117 --rc genhtml_legend=1 00:24:47.117 --rc geninfo_all_blocks=1 00:24:47.117 --rc geninfo_unexecuted_blocks=1 00:24:47.117 00:24:47.117 ' 00:24:47.117 01:53:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:47.117 01:53:56 -- nvmf/common.sh@7 -- # uname -s 00:24:47.117 01:53:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.117 01:53:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.117 01:53:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.117 01:53:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.117 01:53:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.117 01:53:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.117 01:53:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.117 01:53:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.117 01:53:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.117 01:53:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.117 01:53:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96338afd-f13f-4e08-a2c8-83ca5aea5d67 00:24:47.117 
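The lt/cmp_versions trace above is how autotest_common.sh decides that the installed lcov (1.15 here) predates version 2 before choosing its coverage flag spelling: both version strings are split on ".", "-" and ":" and compared component by component as integers, with the shorter one padded with zeros. A condensed sketch of that comparison, assuming purely numeric components (not the verbatim scripts/common.sh implementation):

    # Condensed sketch of the lt/cmp_versions logic traced above.
    lt() {  # usage: lt 1.15 2  -> status 0 iff $1 < $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # versions are equal, so not strictly less-than
    }
    lt 1.15 2 && echo "old lcov: use the --rc lcov_* coverage flag names"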
01:53:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=96338afd-f13f-4e08-a2c8-83ca5aea5d67 00:24:47.117 01:53:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.117 01:53:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.117 01:53:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:47.117 01:53:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.117 01:53:56 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:47.117 01:53:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.117 01:53:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.117 01:53:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.118 01:53:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.118 01:53:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.118 01:53:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.118 01:53:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.118 01:53:56 -- paths/export.sh@5 -- # export PATH 00:24:47.118 01:53:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.118 01:53:56 -- nvmf/common.sh@51 -- # : 0 00:24:47.118 01:53:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.118 01:53:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.118 01:53:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.118 01:53:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.118 01:53:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.118 01:53:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.118 01:53:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.118 01:53:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.118 01:53:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.118 01:53:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:24:47.118 01:53:56 -- spdk/autotest.sh@32 -- # uname -s 00:24:47.118 01:53:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:24:47.118 01:53:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:24:47.118 01:53:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:47.118 01:53:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:24:47.118 01:53:56 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:47.118 01:53:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:24:47.118 01:53:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:24:47.118 01:53:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:24:47.118 01:53:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:24:47.118 01:53:56 -- spdk/autotest.sh@48 -- # udevadm_pid=55323 00:24:47.118 01:53:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:24:47.118 01:53:56 -- pm/common@17 -- # local monitor 00:24:47.118 01:53:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:24:47.118 01:53:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:24:47.118 01:53:56 -- pm/common@25 -- # sleep 1 00:24:47.118 01:53:56 -- pm/common@21 -- # date +%s 00:24:47.118 01:53:56 -- pm/common@21 -- # date +%s 00:24:47.118 01:53:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728957236 00:24:47.118 01:53:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728957236 00:24:47.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728957236_collect-vmstat.pm.log 00:24:47.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728957236_collect-cpu-load.pm.log 00:24:48.311 01:53:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:24:48.311 01:53:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:24:48.311 01:53:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.311 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:24:48.311 01:53:57 -- spdk/autotest.sh@59 -- # create_test_list 00:24:48.311 01:53:57 -- common/autotest_common.sh@748 -- # xtrace_disable 00:24:48.311 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:24:48.311 01:53:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:24:48.311 01:53:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:24:48.311 01:53:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:24:48.311 01:53:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:24:48.311 01:53:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:24:48.311 01:53:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:24:48.311 01:53:57 -- common/autotest_common.sh@1455 -- # uname 00:24:48.311 01:53:57 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:24:48.311 01:53:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:24:48.311 01:53:57 -- common/autotest_common.sh@1475 -- # uname 00:24:48.311 01:53:57 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:24:48.311 01:53:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:24:48.311 01:53:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:24:48.311 lcov: LCOV version 1.15 00:24:48.311 01:53:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:25:06.427 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:25:06.427 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:25:24.540 01:54:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:25:24.540 01:54:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.540 01:54:32 -- common/autotest_common.sh@10 -- # set +x 00:25:24.540 01:54:32 -- spdk/autotest.sh@78 -- # rm -f 00:25:24.540 01:54:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:24.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.540 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:24.540 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:24.540 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:24.540 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:24.540 01:54:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:25:24.540 01:54:33 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:25:24.540 01:54:33 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:25:24.540 01:54:33 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:25:24.540 01:54:33 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:24.540 01:54:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:25:24.540 01:54:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:25:24.540 01:54:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:24.540 01:54:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:25:24.540 01:54:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:24.540 01:54:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:24.540 01:54:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:25:24.540 01:54:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:25:24.540 01:54:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:25:24.540 No valid GPT data, bailing 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # pt= 00:25:24.540 01:54:33 -- scripts/common.sh@395 -- # return 1 00:25:24.540 01:54:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:25:24.540 1+0 records in 00:25:24.540 1+0 records out 00:25:24.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133705 s, 78.4 MB/s 00:25:24.540 01:54:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:24.540 01:54:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:24.540 01:54:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:25:24.540 01:54:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:25:24.540 01:54:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:25:24.540 No valid GPT data, bailing 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # pt= 00:25:24.540 01:54:33 -- scripts/common.sh@395 -- # return 1 00:25:24.540 01:54:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:25:24.540 1+0 records in 00:25:24.540 1+0 records out 00:25:24.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453775 s, 231 MB/s 00:25:24.540 01:54:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:24.540 01:54:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:24.540 01:54:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:25:24.540 01:54:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:25:24.540 01:54:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:25:24.540 No valid GPT data, bailing 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # pt= 00:25:24.540 01:54:33 -- scripts/common.sh@395 -- # return 1 00:25:24.540 01:54:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:25:24.540 1+0 
records in 00:25:24.540 1+0 records out 00:25:24.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474172 s, 221 MB/s 00:25:24.540 01:54:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:24.540 01:54:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:24.540 01:54:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:25:24.540 01:54:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:25:24.540 01:54:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:25:24.540 No valid GPT data, bailing 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # pt= 00:25:24.540 01:54:33 -- scripts/common.sh@395 -- # return 1 00:25:24.540 01:54:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:25:24.540 1+0 records in 00:25:24.540 1+0 records out 00:25:24.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400896 s, 262 MB/s 00:25:24.540 01:54:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:24.540 01:54:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:24.540 01:54:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:25:24.540 01:54:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:25:24.540 01:54:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:25:24.540 No valid GPT data, bailing 00:25:24.540 01:54:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:25:24.799 01:54:33 -- scripts/common.sh@394 -- # pt= 00:25:24.799 01:54:33 -- scripts/common.sh@395 -- # return 1 00:25:24.799 01:54:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:25:24.799 1+0 records in 00:25:24.799 1+0 records out 00:25:24.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441705 s, 237 MB/s 00:25:24.799 01:54:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:24.799 01:54:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:24.799 01:54:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:25:24.799 01:54:33 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:25:24.799 01:54:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:25:24.799 No valid GPT data, bailing 00:25:24.799 01:54:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:25:24.799 01:54:33 -- scripts/common.sh@394 -- # pt= 00:25:24.799 01:54:33 -- scripts/common.sh@395 -- # return 1 00:25:24.799 01:54:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:25:24.799 1+0 records in 00:25:24.799 1+0 records out 00:25:24.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458574 s, 229 MB/s 00:25:24.799 01:54:33 -- spdk/autotest.sh@105 -- # sync 00:25:24.799 01:54:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:25:24.799 01:54:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:25:24.799 01:54:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:25:26.702 01:54:35 -- spdk/autotest.sh@111 -- # uname -s 00:25:26.702 01:54:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:25:26.702 01:54:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:25:26.702 01:54:35 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:25:27.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:27.835 
Hugepages 00:25:27.835 node hugesize free / total 00:25:27.835 node0 1048576kB 0 / 0 00:25:27.835 node0 2048kB 0 / 0 00:25:27.835 00:25:27.835 Type BDF Vendor Device NUMA Driver Device Block devices 00:25:27.835 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:25:27.835 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:25:27.835 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:25:28.093 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:25:28.093 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:25:28.093 01:54:36 -- spdk/autotest.sh@117 -- # uname -s 00:25:28.093 01:54:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:25:28.093 01:54:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:25:28.093 01:54:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:28.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:29.227 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.227 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.227 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.227 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.227 01:54:38 -- common/autotest_common.sh@1515 -- # sleep 1 00:25:30.601 01:54:39 -- common/autotest_common.sh@1516 -- # bdfs=() 00:25:30.602 01:54:39 -- common/autotest_common.sh@1516 -- # local bdfs 00:25:30.602 01:54:39 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:25:30.602 01:54:39 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:25:30.602 01:54:39 -- common/autotest_common.sh@1496 -- # bdfs=() 00:25:30.602 01:54:39 -- common/autotest_common.sh@1496 -- # local bdfs 00:25:30.602 01:54:39 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:30.602 01:54:39 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:30.602 01:54:39 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:25:30.602 01:54:39 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:25:30.602 01:54:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:30.602 01:54:39 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:30.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:30.860 Waiting for block devices as requested 00:25:30.860 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:31.118 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:31.118 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:31.118 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:36.435 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:36.435 01:54:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:36.435 01:54:45 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:25:36.435 01:54:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:36.435 01:54:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1541 -- # continue 00:25:36.435 01:54:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:36.435 01:54:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1541 -- # continue 00:25:36.435 01:54:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:36.435 01:54:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1541 -- # continue 00:25:36.435 01:54:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:25:36.435 01:54:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:36.435 01:54:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:36.435 01:54:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:36.435 01:54:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
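Each iteration of the loop traced above does the same three things: resolve a PCI address (BDF) to its /dev/nvmeX character device through /sys/class/nvme, confirm the controller advertises namespace management (OACS bit 0x8, which is why oacs=' 0x12a' yields oacs_ns_manage=8), and read the unallocated capacity; with unvmcap at 0 there is nothing to revert, so the loop continues. A hedged sketch of one iteration using stock nvme-cli (condensed, not the verbatim autotest_common.sh helpers):

    # One iteration of the per-controller check traced above (illustrative).
    bdf=0000:00:10.0
    # /sys/class/nvme/nvmeN resolves to .../<bdf>/nvme/nvmeN under its PCI parent.
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then  # bit 3 of OACS: namespace management
        unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity, skip"
    fi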
00:25:36.435 01:54:45 -- common/autotest_common.sh@1541 -- # continue 00:25:36.435 01:54:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:25:36.435 01:54:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.435 01:54:45 -- common/autotest_common.sh@10 -- # set +x 00:25:36.435 01:54:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:25:36.435 01:54:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.435 01:54:45 -- common/autotest_common.sh@10 -- # set +x 00:25:36.435 01:54:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:37.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:37.567 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:37.567 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:37.567 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:37.567 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:37.826 01:54:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:25:37.826 01:54:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.826 01:54:46 -- common/autotest_common.sh@10 -- # set +x 00:25:37.826 01:54:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:25:37.826 01:54:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:25:37.826 01:54:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:25:37.826 01:54:46 -- common/autotest_common.sh@1561 -- # bdfs=() 00:25:37.826 01:54:46 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:25:37.826 01:54:46 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:25:37.826 01:54:46 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:25:37.826 01:54:46 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:25:37.826 01:54:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:25:37.826 01:54:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:25:37.826 01:54:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:37.826 01:54:46 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:37.826 01:54:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:25:37.826 01:54:46 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:25:37.826 01:54:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:37.826 01:54:46 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:37.826 01:54:46 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:37.826 01:54:46 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:37.826 01:54:46 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:37.826 01:54:46 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:37.826 01:54:46 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
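The opal_revert_cleanup trace above filters the discovered controllers by PCI device ID before attempting an Opal revert: only drives whose /sys/bus/pci/devices/<bdf>/device reads 0x0a54 (an Intel data-center NVMe device ID) qualify, and the emulated QEMU controllers all report 0x0010, so the resulting list stays empty and the cleanup returns immediately. A minimal sketch of that filter:

    # Illustrative sketch of the get_nvme_bdfs_by_id 0x0a54 pass traced above.
    want=0x0a54
    matched=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$want" ]] && matched+=("$bdf")
    done
    echo "matched ${#matched[@]} controller(s)"  # 0 on this emulated setup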
00:25:37.826 01:54:46 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:25:37.826 01:54:46 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:37.826 01:54:46 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:37.826 01:54:46 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:25:37.826 01:54:46 -- common/autotest_common.sh@1570 -- # return 0 00:25:37.826 01:54:46 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:25:37.826 01:54:46 -- common/autotest_common.sh@1578 -- # return 0 00:25:37.826 01:54:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:25:37.826 01:54:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:25:37.826 01:54:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:37.826 01:54:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:37.826 01:54:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:25:37.826 01:54:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.826 01:54:46 -- common/autotest_common.sh@10 -- # set +x 00:25:37.826 01:54:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:25:37.826 01:54:46 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:37.826 01:54:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:37.826 01:54:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.826 01:54:46 -- common/autotest_common.sh@10 -- # set +x 00:25:37.826 ************************************ 00:25:37.826 START TEST env 00:25:37.826 ************************************ 00:25:37.826 01:54:46 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:38.086 * Looking for test storage... 00:25:38.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1681 -- # lcov --version 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:38.086 01:54:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.086 01:54:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.086 01:54:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.086 01:54:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.086 01:54:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.086 01:54:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.086 01:54:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.086 01:54:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.086 01:54:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.086 01:54:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.086 01:54:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.086 01:54:46 env -- scripts/common.sh@344 -- # case "$op" in 00:25:38.086 01:54:46 env -- scripts/common.sh@345 -- # : 1 00:25:38.086 01:54:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.086 01:54:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.086 01:54:46 env -- scripts/common.sh@365 -- # decimal 1 00:25:38.086 01:54:46 env -- scripts/common.sh@353 -- # local d=1 00:25:38.086 01:54:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.086 01:54:46 env -- scripts/common.sh@355 -- # echo 1 00:25:38.086 01:54:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.086 01:54:46 env -- scripts/common.sh@366 -- # decimal 2 00:25:38.086 01:54:46 env -- scripts/common.sh@353 -- # local d=2 00:25:38.086 01:54:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.086 01:54:46 env -- scripts/common.sh@355 -- # echo 2 00:25:38.086 01:54:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.086 01:54:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.086 01:54:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.086 01:54:46 env -- scripts/common.sh@368 -- # return 0 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.086 --rc genhtml_branch_coverage=1 00:25:38.086 --rc genhtml_function_coverage=1 00:25:38.086 --rc genhtml_legend=1 00:25:38.086 --rc geninfo_all_blocks=1 00:25:38.086 --rc geninfo_unexecuted_blocks=1 00:25:38.086 00:25:38.086 ' 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.086 --rc genhtml_branch_coverage=1 00:25:38.086 --rc genhtml_function_coverage=1 00:25:38.086 --rc genhtml_legend=1 00:25:38.086 --rc geninfo_all_blocks=1 00:25:38.086 --rc geninfo_unexecuted_blocks=1 00:25:38.086 00:25:38.086 ' 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.086 --rc genhtml_branch_coverage=1 00:25:38.086 --rc genhtml_function_coverage=1 00:25:38.086 --rc genhtml_legend=1 00:25:38.086 --rc geninfo_all_blocks=1 00:25:38.086 --rc geninfo_unexecuted_blocks=1 00:25:38.086 00:25:38.086 ' 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.086 --rc genhtml_branch_coverage=1 00:25:38.086 --rc genhtml_function_coverage=1 00:25:38.086 --rc genhtml_legend=1 00:25:38.086 --rc geninfo_all_blocks=1 00:25:38.086 --rc geninfo_unexecuted_blocks=1 00:25:38.086 00:25:38.086 ' 00:25:38.086 01:54:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:38.086 01:54:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:38.086 01:54:46 env -- common/autotest_common.sh@10 -- # set +x 00:25:38.086 ************************************ 00:25:38.086 START TEST env_memory 00:25:38.086 ************************************ 00:25:38.086 01:54:46 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:38.086 00:25:38.086 00:25:38.086 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.086 http://cunit.sourceforge.net/ 00:25:38.086 00:25:38.086 00:25:38.086 Suite: memory 00:25:38.086 Test: alloc and free memory map ...[2024-10-15 01:54:47.038686] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:25:38.086 passed 00:25:38.345 Test: mem map translation ...[2024-10-15 01:54:47.099454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:25:38.345 [2024-10-15 01:54:47.099560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:25:38.345 [2024-10-15 01:54:47.099668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:25:38.345 [2024-10-15 01:54:47.099703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:25:38.345 passed 00:25:38.345 Test: mem map registration ...[2024-10-15 01:54:47.198631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:25:38.345 [2024-10-15 01:54:47.198736] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:25:38.345 passed 00:25:38.345 Test: mem map adjacent registrations ...passed 00:25:38.345 00:25:38.345 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.345 suites 1 1 n/a 0 0 00:25:38.345 tests 4 4 4 0 0 00:25:38.345 asserts 152 152 152 0 n/a 00:25:38.345 00:25:38.345 Elapsed time = 0.349 seconds 00:25:38.345 00:25:38.345 real 0m0.390s 00:25:38.345 user 0m0.350s 00:25:38.345 sys 0m0.031s 00:25:38.345 01:54:47 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:38.345 01:54:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:25:38.345 ************************************ 00:25:38.345 END TEST env_memory 00:25:38.345 ************************************ 00:25:38.604 01:54:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:38.604 01:54:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:38.604 01:54:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:38.604 01:54:47 env -- common/autotest_common.sh@10 -- # set +x 00:25:38.604 ************************************ 00:25:38.604 START TEST env_vtophys 00:25:38.604 ************************************ 00:25:38.604 01:54:47 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:38.604 EAL: lib.eal log level changed from notice to debug 00:25:38.604 EAL: Detected lcore 0 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 1 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 2 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 3 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 4 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 5 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 6 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 7 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 8 as core 0 on socket 0 00:25:38.604 EAL: Detected lcore 9 as core 0 on socket 0 00:25:38.604 EAL: Maximum logical cores by configuration: 128 00:25:38.604 EAL: Detected CPU lcores: 10 00:25:38.604 EAL: Detected NUMA nodes: 1 00:25:38.604 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:25:38.604 EAL: Detected shared linkage of DPDK 00:25:38.604 EAL: No 
shared files mode enabled, IPC will be disabled 00:25:38.604 EAL: Selected IOVA mode 'PA' 00:25:38.604 EAL: Probing VFIO support... 00:25:38.604 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:38.604 EAL: VFIO modules not loaded, skipping VFIO support... 00:25:38.604 EAL: Ask a virtual area of 0x2e000 bytes 00:25:38.604 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:25:38.604 EAL: Setting up physically contiguous memory... 00:25:38.604 EAL: Setting maximum number of open files to 524288 00:25:38.604 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:25:38.604 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:25:38.604 EAL: Ask a virtual area of 0x61000 bytes 00:25:38.604 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:25:38.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:38.604 EAL: Ask a virtual area of 0x400000000 bytes 00:25:38.604 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:25:38.604 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:25:38.604 EAL: Ask a virtual area of 0x61000 bytes 00:25:38.604 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:25:38.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:38.604 EAL: Ask a virtual area of 0x400000000 bytes 00:25:38.604 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:25:38.604 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:25:38.604 EAL: Ask a virtual area of 0x61000 bytes 00:25:38.604 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:25:38.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:38.604 EAL: Ask a virtual area of 0x400000000 bytes 00:25:38.604 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:25:38.604 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:25:38.604 EAL: Ask a virtual area of 0x61000 bytes 00:25:38.604 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:25:38.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:38.604 EAL: Ask a virtual area of 0x400000000 bytes 00:25:38.604 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:25:38.604 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:25:38.604 EAL: Hugepages will be freed exactly as allocated. 00:25:38.604 EAL: No shared files mode enabled, IPC is disabled 00:25:38.604 EAL: No shared files mode enabled, IPC is disabled 00:25:38.604 EAL: TSC frequency is ~2200000 KHz 00:25:38.604 EAL: Main lcore 0 is ready (tid=7fb31bdd9a40;cpuset=[0]) 00:25:38.604 EAL: Trying to obtain current memory policy. 00:25:38.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:38.604 EAL: Restoring previous memory policy: 0 00:25:38.604 EAL: request: mp_malloc_sync 00:25:38.604 EAL: No shared files mode enabled, IPC is disabled 00:25:38.604 EAL: Heap on socket 0 was expanded by 2MB 00:25:38.604 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:38.604 EAL: No PCI address specified using 'addr=' in: bus=pci 00:25:38.604 EAL: Mem event callback 'spdk:(nil)' registered 00:25:38.604 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:25:38.862 00:25:38.862 00:25:38.862 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.862 http://cunit.sourceforge.net/ 00:25:38.862 00:25:38.862 00:25:38.862 Suite: components_suite 00:25:39.121 Test: vtophys_malloc_test ...passed 00:25:39.379 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:25:39.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.379 EAL: Restoring previous memory policy: 4 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was expanded by 4MB 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was shrunk by 4MB 00:25:39.379 EAL: Trying to obtain current memory policy. 00:25:39.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.379 EAL: Restoring previous memory policy: 4 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was expanded by 6MB 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was shrunk by 6MB 00:25:39.379 EAL: Trying to obtain current memory policy. 00:25:39.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.379 EAL: Restoring previous memory policy: 4 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was expanded by 10MB 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was shrunk by 10MB 00:25:39.379 EAL: Trying to obtain current memory policy. 00:25:39.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.379 EAL: Restoring previous memory policy: 4 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was expanded by 18MB 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was shrunk by 18MB 00:25:39.379 EAL: Trying to obtain current memory policy. 00:25:39.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.379 EAL: Restoring previous memory policy: 4 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was expanded by 34MB 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was shrunk by 34MB 00:25:39.379 EAL: Trying to obtain current memory policy. 
00:25:39.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.379 EAL: Restoring previous memory policy: 4 00:25:39.379 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.379 EAL: request: mp_malloc_sync 00:25:39.379 EAL: No shared files mode enabled, IPC is disabled 00:25:39.379 EAL: Heap on socket 0 was expanded by 66MB 00:25:39.637 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.637 EAL: request: mp_malloc_sync 00:25:39.637 EAL: No shared files mode enabled, IPC is disabled 00:25:39.637 EAL: Heap on socket 0 was shrunk by 66MB 00:25:39.637 EAL: Trying to obtain current memory policy. 00:25:39.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:39.637 EAL: Restoring previous memory policy: 4 00:25:39.637 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.638 EAL: request: mp_malloc_sync 00:25:39.638 EAL: No shared files mode enabled, IPC is disabled 00:25:39.638 EAL: Heap on socket 0 was expanded by 130MB 00:25:39.916 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.916 EAL: request: mp_malloc_sync 00:25:39.916 EAL: No shared files mode enabled, IPC is disabled 00:25:39.916 EAL: Heap on socket 0 was shrunk by 130MB 00:25:40.174 EAL: Trying to obtain current memory policy. 00:25:40.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:40.174 EAL: Restoring previous memory policy: 4 00:25:40.174 EAL: Calling mem event callback 'spdk:(nil)' 00:25:40.174 EAL: request: mp_malloc_sync 00:25:40.174 EAL: No shared files mode enabled, IPC is disabled 00:25:40.174 EAL: Heap on socket 0 was expanded by 258MB 00:25:40.740 EAL: Calling mem event callback 'spdk:(nil)' 00:25:40.740 EAL: request: mp_malloc_sync 00:25:40.740 EAL: No shared files mode enabled, IPC is disabled 00:25:40.740 EAL: Heap on socket 0 was shrunk by 258MB 00:25:40.999 EAL: Trying to obtain current memory policy. 00:25:40.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:41.259 EAL: Restoring previous memory policy: 4 00:25:41.259 EAL: Calling mem event callback 'spdk:(nil)' 00:25:41.259 EAL: request: mp_malloc_sync 00:25:41.259 EAL: No shared files mode enabled, IPC is disabled 00:25:41.259 EAL: Heap on socket 0 was expanded by 514MB 00:25:42.196 EAL: Calling mem event callback 'spdk:(nil)' 00:25:42.196 EAL: request: mp_malloc_sync 00:25:42.196 EAL: No shared files mode enabled, IPC is disabled 00:25:42.196 EAL: Heap on socket 0 was shrunk by 514MB 00:25:42.763 EAL: Trying to obtain current memory policy. 
00:25:42.763 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:43.022 EAL: Restoring previous memory policy: 4 00:25:43.022 EAL: Calling mem event callback 'spdk:(nil)' 00:25:43.022 EAL: request: mp_malloc_sync 00:25:43.022 EAL: No shared files mode enabled, IPC is disabled 00:25:43.022 EAL: Heap on socket 0 was expanded by 1026MB 00:25:44.923 EAL: Calling mem event callback 'spdk:(nil)' 00:25:44.923 EAL: request: mp_malloc_sync 00:25:44.923 EAL: No shared files mode enabled, IPC is disabled 00:25:44.923 EAL: Heap on socket 0 was shrunk by 1026MB 00:25:46.298 passed 00:25:46.298 00:25:46.298 Run Summary: Type Total Ran Passed Failed Inactive 00:25:46.298 suites 1 1 n/a 0 0 00:25:46.298 tests 2 2 2 0 0 00:25:46.298 asserts 5691 5691 5691 0 n/a 00:25:46.298 00:25:46.298 Elapsed time = 7.529 seconds 00:25:46.298 EAL: Calling mem event callback 'spdk:(nil)' 00:25:46.298 EAL: request: mp_malloc_sync 00:25:46.298 EAL: No shared files mode enabled, IPC is disabled 00:25:46.298 EAL: Heap on socket 0 was shrunk by 2MB 00:25:46.298 EAL: No shared files mode enabled, IPC is disabled 00:25:46.298 EAL: No shared files mode enabled, IPC is disabled 00:25:46.298 EAL: No shared files mode enabled, IPC is disabled 00:25:46.298 00:25:46.298 real 0m7.874s 00:25:46.298 user 0m6.634s 00:25:46.298 sys 0m1.060s 00:25:46.298 ************************************ 00:25:46.298 END TEST env_vtophys 00:25:46.298 ************************************ 00:25:46.298 01:54:55 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.298 01:54:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:25:46.556 01:54:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:46.556 01:54:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:46.556 01:54:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:46.556 01:54:55 env -- common/autotest_common.sh@10 -- # set +x 00:25:46.556 ************************************ 00:25:46.556 START TEST env_pci 00:25:46.556 ************************************ 00:25:46.556 01:54:55 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:46.556 00:25:46.556 00:25:46.556 CUnit - A unit testing framework for C - Version 2.1-3 00:25:46.556 http://cunit.sourceforge.net/ 00:25:46.556 00:25:46.556 00:25:46.556 Suite: pci 00:25:46.556 Test: pci_hook ...[2024-10-15 01:54:55.372367] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58194 has claimed it 00:25:46.556 EAL: Cannot find device (10000:00:01.0) 00:25:46.556 passed 00:25:46.556 00:25:46.556 Run Summary: Type Total Ran Passed Failed Inactive 00:25:46.556 suites 1 1 n/a 0 0 00:25:46.556 tests 1 1 1 0 0 00:25:46.556 asserts 25 25 25 0 n/a 00:25:46.556 00:25:46.556 Elapsed time = 0.009 seconds 00:25:46.556 EAL: Failed to attach device on primary process 00:25:46.556 ************************************ 00:25:46.556 END TEST env_pci 00:25:46.556 ************************************ 00:25:46.556 00:25:46.556 real 0m0.086s 00:25:46.556 user 0m0.036s 00:25:46.556 sys 0m0.049s 00:25:46.556 01:54:55 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.556 01:54:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:25:46.556 01:54:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:25:46.556 01:54:55 env -- env/env.sh@15 -- # uname 00:25:46.556 01:54:55 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:25:46.556 01:54:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:25:46.556 01:54:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:46.556 01:54:55 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:46.556 01:54:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:46.556 01:54:55 env -- common/autotest_common.sh@10 -- # set +x 00:25:46.556 ************************************ 00:25:46.556 START TEST env_dpdk_post_init 00:25:46.556 ************************************ 00:25:46.556 01:54:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:46.556 EAL: Detected CPU lcores: 10 00:25:46.556 EAL: Detected NUMA nodes: 1 00:25:46.556 EAL: Detected shared linkage of DPDK 00:25:46.815 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:46.815 EAL: Selected IOVA mode 'PA' 00:25:46.815 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:46.815 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:25:46.815 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:25:46.815 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:25:46.815 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:25:46.815 Starting DPDK initialization... 00:25:46.815 Starting SPDK post initialization... 00:25:46.815 SPDK NVMe probe 00:25:46.815 Attaching to 0000:00:10.0 00:25:46.815 Attaching to 0000:00:11.0 00:25:46.815 Attaching to 0000:00:12.0 00:25:46.815 Attaching to 0000:00:13.0 00:25:46.815 Attached to 0000:00:10.0 00:25:46.815 Attached to 0000:00:11.0 00:25:46.815 Attached to 0000:00:13.0 00:25:46.815 Attached to 0000:00:12.0 00:25:46.815 Cleaning up... 
00:25:46.815 00:25:46.815 real 0m0.317s 00:25:46.815 user 0m0.098s 00:25:46.815 sys 0m0.119s 00:25:46.815 01:54:55 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.815 01:54:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:25:46.815 ************************************ 00:25:46.815 END TEST env_dpdk_post_init 00:25:46.815 ************************************ 00:25:47.073 01:54:55 env -- env/env.sh@26 -- # uname 00:25:47.073 01:54:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:25:47.073 01:54:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:47.073 01:54:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:47.073 01:54:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.073 01:54:55 env -- common/autotest_common.sh@10 -- # set +x 00:25:47.073 ************************************ 00:25:47.073 START TEST env_mem_callbacks 00:25:47.073 ************************************ 00:25:47.073 01:54:55 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:47.073 EAL: Detected CPU lcores: 10 00:25:47.073 EAL: Detected NUMA nodes: 1 00:25:47.073 EAL: Detected shared linkage of DPDK 00:25:47.073 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:47.073 EAL: Selected IOVA mode 'PA' 00:25:47.073 00:25:47.073 00:25:47.073 CUnit - A unit testing framework for C - Version 2.1-3 00:25:47.073 http://cunit.sourceforge.net/ 00:25:47.073 00:25:47.073 00:25:47.073 Suite: memory 00:25:47.073 Test: test ... 00:25:47.073 register 0x200000200000 2097152 00:25:47.073 malloc 3145728 00:25:47.073 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:47.073 register 0x200000400000 4194304 00:25:47.073 buf 0x2000004fffc0 len 3145728 PASSED 00:25:47.073 malloc 64 00:25:47.073 buf 0x2000004ffec0 len 64 PASSED 00:25:47.073 malloc 4194304 00:25:47.073 register 0x200000800000 6291456 00:25:47.073 buf 0x2000009fffc0 len 4194304 PASSED 00:25:47.073 free 0x2000004fffc0 3145728 00:25:47.073 free 0x2000004ffec0 64 00:25:47.073 unregister 0x200000400000 4194304 PASSED 00:25:47.073 free 0x2000009fffc0 4194304 00:25:47.073 unregister 0x200000800000 6291456 PASSED 00:25:47.073 malloc 8388608 00:25:47.073 register 0x200000400000 10485760 00:25:47.332 buf 0x2000005fffc0 len 8388608 PASSED 00:25:47.332 free 0x2000005fffc0 8388608 00:25:47.332 unregister 0x200000400000 10485760 PASSED 00:25:47.332 passed 00:25:47.332 00:25:47.332 Run Summary: Type Total Ran Passed Failed Inactive 00:25:47.332 suites 1 1 n/a 0 0 00:25:47.332 tests 1 1 1 0 0 00:25:47.332 asserts 15 15 15 0 n/a 00:25:47.332 00:25:47.332 Elapsed time = 0.072 seconds 00:25:47.332 00:25:47.332 real 0m0.282s 00:25:47.332 user 0m0.106s 00:25:47.332 sys 0m0.073s 00:25:47.332 01:54:56 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.332 01:54:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:25:47.332 ************************************ 00:25:47.332 END TEST env_mem_callbacks 00:25:47.332 ************************************ 00:25:47.332 00:25:47.332 real 0m9.409s 00:25:47.332 user 0m7.433s 00:25:47.332 sys 0m1.570s 00:25:47.332 ************************************ 00:25:47.332 END TEST env 00:25:47.332 ************************************ 00:25:47.332 01:54:56 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.332 01:54:56 env -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.332 01:54:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:47.332 01:54:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:47.332 01:54:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.332 01:54:56 -- common/autotest_common.sh@10 -- # set +x 00:25:47.332 ************************************ 00:25:47.332 START TEST rpc 00:25:47.332 ************************************ 00:25:47.332 01:54:56 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:47.332 * Looking for test storage... 00:25:47.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:47.332 01:54:56 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:47.332 01:54:56 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:47.332 01:54:56 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:25:47.590 01:54:56 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:47.590 01:54:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.590 01:54:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.590 01:54:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.590 01:54:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.590 01:54:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.590 01:54:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.590 01:54:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.590 01:54:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.590 01:54:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.590 01:54:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:47.590 01:54:56 rpc -- scripts/common.sh@345 -- # : 1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.590 01:54:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.590 01:54:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@353 -- # local d=1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.590 01:54:56 rpc -- scripts/common.sh@355 -- # echo 1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.590 01:54:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:25:47.590 01:54:56 rpc -- scripts/common.sh@353 -- # local d=2 00:25:47.591 01:54:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.591 01:54:56 rpc -- scripts/common.sh@355 -- # echo 2 00:25:47.591 01:54:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.591 01:54:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.591 01:54:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.591 01:54:56 rpc -- scripts/common.sh@368 -- # return 0 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:47.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.591 --rc genhtml_branch_coverage=1 00:25:47.591 --rc genhtml_function_coverage=1 00:25:47.591 --rc genhtml_legend=1 00:25:47.591 --rc geninfo_all_blocks=1 00:25:47.591 --rc geninfo_unexecuted_blocks=1 00:25:47.591 00:25:47.591 ' 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:47.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.591 --rc genhtml_branch_coverage=1 00:25:47.591 --rc genhtml_function_coverage=1 00:25:47.591 --rc genhtml_legend=1 00:25:47.591 --rc geninfo_all_blocks=1 00:25:47.591 --rc geninfo_unexecuted_blocks=1 00:25:47.591 00:25:47.591 ' 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:47.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.591 --rc genhtml_branch_coverage=1 00:25:47.591 --rc genhtml_function_coverage=1 00:25:47.591 --rc genhtml_legend=1 00:25:47.591 --rc geninfo_all_blocks=1 00:25:47.591 --rc geninfo_unexecuted_blocks=1 00:25:47.591 00:25:47.591 ' 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:47.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.591 --rc genhtml_branch_coverage=1 00:25:47.591 --rc genhtml_function_coverage=1 00:25:47.591 --rc genhtml_legend=1 00:25:47.591 --rc geninfo_all_blocks=1 00:25:47.591 --rc geninfo_unexecuted_blocks=1 00:25:47.591 00:25:47.591 ' 00:25:47.591 01:54:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58321 00:25:47.591 01:54:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:25:47.591 01:54:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:47.591 01:54:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58321 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@831 -- # '[' -z 58321 ']' 00:25:47.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.591 01:54:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 [2024-10-15 01:54:56.515638] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:25:47.591 [2024-10-15 01:54:56.515790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58321 ] 00:25:47.849 [2024-10-15 01:54:56.686153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.199 [2024-10-15 01:54:57.003876] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:25:48.199 [2024-10-15 01:54:57.003967] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58321' to capture a snapshot of events at runtime. 00:25:48.199 [2024-10-15 01:54:57.003988] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.199 [2024-10-15 01:54:57.004008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.199 [2024-10-15 01:54:57.004022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58321 for offline analysis/debug. 00:25:48.199 [2024-10-15 01:54:57.005559] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.133 01:54:57 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.133 01:54:57 rpc -- common/autotest_common.sh@864 -- # return 0 00:25:49.133 01:54:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:49.133 01:54:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:49.133 01:54:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:25:49.133 01:54:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:25:49.133 01:54:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:49.133 01:54:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:49.133 01:54:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 ************************************ 00:25:49.133 START TEST rpc_integrity 00:25:49.133 ************************************ 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.133 01:54:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 01:54:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:49.133 { 00:25:49.133 "name": "Malloc0", 00:25:49.133 "aliases": [ 00:25:49.133 "ad682758-74d2-43d9-97e6-fac93401b969" 00:25:49.133 ], 00:25:49.133 "product_name": "Malloc disk", 00:25:49.133 "block_size": 512, 00:25:49.133 "num_blocks": 16384, 00:25:49.133 "uuid": "ad682758-74d2-43d9-97e6-fac93401b969", 00:25:49.133 "assigned_rate_limits": { 00:25:49.133 "rw_ios_per_sec": 0, 00:25:49.133 "rw_mbytes_per_sec": 0, 00:25:49.133 "r_mbytes_per_sec": 0, 00:25:49.133 "w_mbytes_per_sec": 0 00:25:49.133 }, 00:25:49.133 "claimed": false, 00:25:49.133 "zoned": false, 00:25:49.133 "supported_io_types": { 00:25:49.133 "read": true, 00:25:49.133 "write": true, 00:25:49.133 "unmap": true, 00:25:49.133 "flush": true, 00:25:49.133 "reset": true, 00:25:49.133 "nvme_admin": false, 00:25:49.133 "nvme_io": false, 00:25:49.133 "nvme_io_md": false, 00:25:49.133 "write_zeroes": true, 00:25:49.133 "zcopy": true, 00:25:49.133 "get_zone_info": false, 00:25:49.133 "zone_management": false, 00:25:49.133 "zone_append": false, 00:25:49.133 "compare": false, 00:25:49.133 "compare_and_write": false, 00:25:49.133 "abort": true, 00:25:49.133 "seek_hole": false, 00:25:49.133 "seek_data": false, 00:25:49.133 "copy": true, 00:25:49.133 "nvme_iov_md": false 00:25:49.133 }, 00:25:49.133 "memory_domains": [ 00:25:49.133 { 00:25:49.133 "dma_device_id": "system", 00:25:49.133 "dma_device_type": 1 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.133 "dma_device_type": 2 00:25:49.133 } 00:25:49.133 ], 00:25:49.133 "driver_specific": {} 00:25:49.133 } 00:25:49.133 ]' 00:25:49.133 01:54:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:25:49.133 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:49.133 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 [2024-10-15 01:54:58.042134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:25:49.133 [2024-10-15 01:54:58.042221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.133 [2024-10-15 01:54:58.042266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:49.133 [2024-10-15 01:54:58.042287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.133 [2024-10-15 01:54:58.045216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.133 [2024-10-15 01:54:58.045271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:49.133 Passthru0 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.133 
01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.133 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:49.133 { 00:25:49.133 "name": "Malloc0", 00:25:49.133 "aliases": [ 00:25:49.133 "ad682758-74d2-43d9-97e6-fac93401b969" 00:25:49.133 ], 00:25:49.133 "product_name": "Malloc disk", 00:25:49.133 "block_size": 512, 00:25:49.133 "num_blocks": 16384, 00:25:49.133 "uuid": "ad682758-74d2-43d9-97e6-fac93401b969", 00:25:49.133 "assigned_rate_limits": { 00:25:49.133 "rw_ios_per_sec": 0, 00:25:49.133 "rw_mbytes_per_sec": 0, 00:25:49.133 "r_mbytes_per_sec": 0, 00:25:49.133 "w_mbytes_per_sec": 0 00:25:49.133 }, 00:25:49.133 "claimed": true, 00:25:49.133 "claim_type": "exclusive_write", 00:25:49.133 "zoned": false, 00:25:49.133 "supported_io_types": { 00:25:49.133 "read": true, 00:25:49.133 "write": true, 00:25:49.133 "unmap": true, 00:25:49.133 "flush": true, 00:25:49.133 "reset": true, 00:25:49.133 "nvme_admin": false, 00:25:49.133 "nvme_io": false, 00:25:49.133 "nvme_io_md": false, 00:25:49.133 "write_zeroes": true, 00:25:49.133 "zcopy": true, 00:25:49.133 "get_zone_info": false, 00:25:49.133 "zone_management": false, 00:25:49.133 "zone_append": false, 00:25:49.133 "compare": false, 00:25:49.133 "compare_and_write": false, 00:25:49.133 "abort": true, 00:25:49.133 "seek_hole": false, 00:25:49.133 "seek_data": false, 00:25:49.133 "copy": true, 00:25:49.133 "nvme_iov_md": false 00:25:49.133 }, 00:25:49.133 "memory_domains": [ 00:25:49.133 { 00:25:49.133 "dma_device_id": "system", 00:25:49.133 "dma_device_type": 1 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.133 "dma_device_type": 2 00:25:49.133 } 00:25:49.133 ], 00:25:49.133 "driver_specific": {} 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "name": "Passthru0", 00:25:49.133 "aliases": [ 00:25:49.133 "99d6923d-4cb4-5c1b-a49f-d9b5102b6856" 00:25:49.133 ], 00:25:49.133 "product_name": "passthru", 00:25:49.133 "block_size": 512, 00:25:49.133 "num_blocks": 16384, 00:25:49.133 "uuid": "99d6923d-4cb4-5c1b-a49f-d9b5102b6856", 00:25:49.133 "assigned_rate_limits": { 00:25:49.133 "rw_ios_per_sec": 0, 00:25:49.133 "rw_mbytes_per_sec": 0, 00:25:49.133 "r_mbytes_per_sec": 0, 00:25:49.133 "w_mbytes_per_sec": 0 00:25:49.133 }, 00:25:49.133 "claimed": false, 00:25:49.133 "zoned": false, 00:25:49.133 "supported_io_types": { 00:25:49.133 "read": true, 00:25:49.133 "write": true, 00:25:49.133 "unmap": true, 00:25:49.133 "flush": true, 00:25:49.133 "reset": true, 00:25:49.133 "nvme_admin": false, 00:25:49.133 "nvme_io": false, 00:25:49.133 "nvme_io_md": false, 00:25:49.133 "write_zeroes": true, 00:25:49.133 "zcopy": true, 00:25:49.133 "get_zone_info": false, 00:25:49.133 "zone_management": false, 00:25:49.133 "zone_append": false, 00:25:49.133 "compare": false, 00:25:49.133 "compare_and_write": false, 00:25:49.133 "abort": true, 00:25:49.133 "seek_hole": false, 00:25:49.133 "seek_data": false, 00:25:49.133 "copy": true, 00:25:49.133 "nvme_iov_md": false 00:25:49.133 }, 00:25:49.133 "memory_domains": [ 00:25:49.133 { 00:25:49.133 "dma_device_id": "system", 00:25:49.133 "dma_device_type": 1 00:25:49.133 }, 00:25:49.133 { 00:25:49.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.133 "dma_device_type": 2 
00:25:49.133 } 00:25:49.133 ], 00:25:49.133 "driver_specific": { 00:25:49.133 "passthru": { 00:25:49.133 "name": "Passthru0", 00:25:49.133 "base_bdev_name": "Malloc0" 00:25:49.133 } 00:25:49.133 } 00:25:49.133 } 00:25:49.133 ]' 00:25:49.133 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:25:49.133 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:49.133 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.133 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.409 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.409 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.409 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:25:49.409 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:25:49.409 ************************************ 00:25:49.409 END TEST rpc_integrity 00:25:49.409 ************************************ 00:25:49.409 01:54:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:49.409 00:25:49.409 real 0m0.365s 00:25:49.409 user 0m0.215s 00:25:49.409 sys 0m0.043s 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:49.409 01:54:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.409 01:54:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:25:49.409 01:54:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:49.409 01:54:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:49.409 01:54:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:49.409 ************************************ 00:25:49.409 START TEST rpc_plugins 00:25:49.409 ************************************ 00:25:49.409 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:25:49.409 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:25:49.409 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.409 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:49.409 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.409 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:25:49.410 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.410 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:25:49.410 { 00:25:49.410 "name": "Malloc1", 00:25:49.410 "aliases": 
[ 00:25:49.410 "e524270e-5de0-4431-a4c1-dbf124665a10" 00:25:49.410 ], 00:25:49.410 "product_name": "Malloc disk", 00:25:49.410 "block_size": 4096, 00:25:49.410 "num_blocks": 256, 00:25:49.410 "uuid": "e524270e-5de0-4431-a4c1-dbf124665a10", 00:25:49.410 "assigned_rate_limits": { 00:25:49.410 "rw_ios_per_sec": 0, 00:25:49.410 "rw_mbytes_per_sec": 0, 00:25:49.410 "r_mbytes_per_sec": 0, 00:25:49.410 "w_mbytes_per_sec": 0 00:25:49.410 }, 00:25:49.410 "claimed": false, 00:25:49.410 "zoned": false, 00:25:49.410 "supported_io_types": { 00:25:49.410 "read": true, 00:25:49.410 "write": true, 00:25:49.410 "unmap": true, 00:25:49.410 "flush": true, 00:25:49.410 "reset": true, 00:25:49.410 "nvme_admin": false, 00:25:49.410 "nvme_io": false, 00:25:49.410 "nvme_io_md": false, 00:25:49.410 "write_zeroes": true, 00:25:49.410 "zcopy": true, 00:25:49.410 "get_zone_info": false, 00:25:49.410 "zone_management": false, 00:25:49.410 "zone_append": false, 00:25:49.410 "compare": false, 00:25:49.410 "compare_and_write": false, 00:25:49.410 "abort": true, 00:25:49.410 "seek_hole": false, 00:25:49.410 "seek_data": false, 00:25:49.410 "copy": true, 00:25:49.410 "nvme_iov_md": false 00:25:49.410 }, 00:25:49.410 "memory_domains": [ 00:25:49.410 { 00:25:49.410 "dma_device_id": "system", 00:25:49.410 "dma_device_type": 1 00:25:49.410 }, 00:25:49.410 { 00:25:49.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.410 "dma_device_type": 2 00:25:49.410 } 00:25:49.410 ], 00:25:49.410 "driver_specific": {} 00:25:49.410 } 00:25:49.410 ]' 00:25:49.410 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:25:49.410 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:25:49.410 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.410 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.410 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:49.703 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.703 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:25:49.703 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:25:49.703 ************************************ 00:25:49.703 END TEST rpc_plugins 00:25:49.703 ************************************ 00:25:49.704 01:54:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:25:49.704 00:25:49.704 real 0m0.166s 00:25:49.704 user 0m0.105s 00:25:49.704 sys 0m0.020s 00:25:49.704 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:49.704 01:54:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:49.704 01:54:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:25:49.704 01:54:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:49.704 01:54:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:49.704 01:54:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:49.704 ************************************ 00:25:49.704 START TEST rpc_trace_cmd_test 00:25:49.704 ************************************ 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:25:49.704 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58321", 00:25:49.704 "tpoint_group_mask": "0x8", 00:25:49.704 "iscsi_conn": { 00:25:49.704 "mask": "0x2", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "scsi": { 00:25:49.704 "mask": "0x4", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "bdev": { 00:25:49.704 "mask": "0x8", 00:25:49.704 "tpoint_mask": "0xffffffffffffffff" 00:25:49.704 }, 00:25:49.704 "nvmf_rdma": { 00:25:49.704 "mask": "0x10", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "nvmf_tcp": { 00:25:49.704 "mask": "0x20", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "ftl": { 00:25:49.704 "mask": "0x40", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "blobfs": { 00:25:49.704 "mask": "0x80", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "dsa": { 00:25:49.704 "mask": "0x200", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "thread": { 00:25:49.704 "mask": "0x400", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "nvme_pcie": { 00:25:49.704 "mask": "0x800", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "iaa": { 00:25:49.704 "mask": "0x1000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "nvme_tcp": { 00:25:49.704 "mask": "0x2000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "bdev_nvme": { 00:25:49.704 "mask": "0x4000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "sock": { 00:25:49.704 "mask": "0x8000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "blob": { 00:25:49.704 "mask": "0x10000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "bdev_raid": { 00:25:49.704 "mask": "0x20000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 }, 00:25:49.704 "scheduler": { 00:25:49.704 "mask": "0x40000", 00:25:49.704 "tpoint_mask": "0x0" 00:25:49.704 } 00:25:49.704 }' 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:25:49.704 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:25:49.963 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:25:49.963 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:25:49.963 ************************************ 00:25:49.963 END TEST rpc_trace_cmd_test 00:25:49.963 ************************************ 00:25:49.963 01:54:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:25:49.963 00:25:49.963 real 0m0.277s 
00:25:49.963 user 0m0.238s 00:25:49.963 sys 0m0.030s 00:25:49.963 01:54:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:49.963 01:54:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.963 01:54:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:25:49.963 01:54:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:25:49.963 01:54:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:25:49.963 01:54:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:49.963 01:54:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:49.963 01:54:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:49.963 ************************************ 00:25:49.963 START TEST rpc_daemon_integrity 00:25:49.963 ************************************ 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:49.963 { 00:25:49.963 "name": "Malloc2", 00:25:49.963 "aliases": [ 00:25:49.963 "aeed90a9-cc43-442a-a6af-e5e55e99d43e" 00:25:49.963 ], 00:25:49.963 "product_name": "Malloc disk", 00:25:49.963 "block_size": 512, 00:25:49.963 "num_blocks": 16384, 00:25:49.963 "uuid": "aeed90a9-cc43-442a-a6af-e5e55e99d43e", 00:25:49.963 "assigned_rate_limits": { 00:25:49.963 "rw_ios_per_sec": 0, 00:25:49.963 "rw_mbytes_per_sec": 0, 00:25:49.963 "r_mbytes_per_sec": 0, 00:25:49.963 "w_mbytes_per_sec": 0 00:25:49.963 }, 00:25:49.963 "claimed": false, 00:25:49.963 "zoned": false, 00:25:49.963 "supported_io_types": { 00:25:49.963 "read": true, 00:25:49.963 "write": true, 00:25:49.963 "unmap": true, 00:25:49.963 "flush": true, 00:25:49.963 "reset": true, 00:25:49.963 "nvme_admin": false, 00:25:49.963 "nvme_io": false, 00:25:49.963 "nvme_io_md": false, 00:25:49.963 "write_zeroes": true, 00:25:49.963 "zcopy": true, 00:25:49.963 "get_zone_info": false, 00:25:49.963 "zone_management": false, 00:25:49.963 "zone_append": false, 00:25:49.963 "compare": false, 00:25:49.963 
"compare_and_write": false, 00:25:49.963 "abort": true, 00:25:49.963 "seek_hole": false, 00:25:49.963 "seek_data": false, 00:25:49.963 "copy": true, 00:25:49.963 "nvme_iov_md": false 00:25:49.963 }, 00:25:49.963 "memory_domains": [ 00:25:49.963 { 00:25:49.963 "dma_device_id": "system", 00:25:49.963 "dma_device_type": 1 00:25:49.963 }, 00:25:49.963 { 00:25:49.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.963 "dma_device_type": 2 00:25:49.963 } 00:25:49.963 ], 00:25:49.963 "driver_specific": {} 00:25:49.963 } 00:25:49.963 ]' 00:25:49.963 01:54:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:50.222 [2024-10-15 01:54:59.015535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:25:50.222 [2024-10-15 01:54:59.015615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.222 [2024-10-15 01:54:59.015645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:50.222 [2024-10-15 01:54:59.015662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.222 [2024-10-15 01:54:59.018539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.222 [2024-10-15 01:54:59.018591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:50.222 Passthru0 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.222 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:50.222 { 00:25:50.222 "name": "Malloc2", 00:25:50.222 "aliases": [ 00:25:50.222 "aeed90a9-cc43-442a-a6af-e5e55e99d43e" 00:25:50.222 ], 00:25:50.222 "product_name": "Malloc disk", 00:25:50.222 "block_size": 512, 00:25:50.222 "num_blocks": 16384, 00:25:50.222 "uuid": "aeed90a9-cc43-442a-a6af-e5e55e99d43e", 00:25:50.222 "assigned_rate_limits": { 00:25:50.222 "rw_ios_per_sec": 0, 00:25:50.222 "rw_mbytes_per_sec": 0, 00:25:50.222 "r_mbytes_per_sec": 0, 00:25:50.222 "w_mbytes_per_sec": 0 00:25:50.222 }, 00:25:50.222 "claimed": true, 00:25:50.222 "claim_type": "exclusive_write", 00:25:50.222 "zoned": false, 00:25:50.222 "supported_io_types": { 00:25:50.222 "read": true, 00:25:50.222 "write": true, 00:25:50.222 "unmap": true, 00:25:50.222 "flush": true, 00:25:50.222 "reset": true, 00:25:50.222 "nvme_admin": false, 00:25:50.222 "nvme_io": false, 00:25:50.222 "nvme_io_md": false, 00:25:50.222 "write_zeroes": true, 00:25:50.222 "zcopy": true, 00:25:50.222 "get_zone_info": false, 00:25:50.222 "zone_management": false, 00:25:50.222 "zone_append": false, 00:25:50.222 "compare": false, 00:25:50.222 "compare_and_write": false, 00:25:50.222 "abort": true, 00:25:50.222 "seek_hole": false, 00:25:50.222 "seek_data": false, 
00:25:50.222 "copy": true, 00:25:50.222 "nvme_iov_md": false 00:25:50.222 }, 00:25:50.222 "memory_domains": [ 00:25:50.222 { 00:25:50.222 "dma_device_id": "system", 00:25:50.222 "dma_device_type": 1 00:25:50.222 }, 00:25:50.222 { 00:25:50.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.222 "dma_device_type": 2 00:25:50.222 } 00:25:50.222 ], 00:25:50.222 "driver_specific": {} 00:25:50.222 }, 00:25:50.222 { 00:25:50.222 "name": "Passthru0", 00:25:50.222 "aliases": [ 00:25:50.222 "79ee7c3e-c9bd-5806-9bb2-aedde8147eed" 00:25:50.222 ], 00:25:50.222 "product_name": "passthru", 00:25:50.222 "block_size": 512, 00:25:50.222 "num_blocks": 16384, 00:25:50.222 "uuid": "79ee7c3e-c9bd-5806-9bb2-aedde8147eed", 00:25:50.222 "assigned_rate_limits": { 00:25:50.222 "rw_ios_per_sec": 0, 00:25:50.222 "rw_mbytes_per_sec": 0, 00:25:50.222 "r_mbytes_per_sec": 0, 00:25:50.222 "w_mbytes_per_sec": 0 00:25:50.222 }, 00:25:50.222 "claimed": false, 00:25:50.223 "zoned": false, 00:25:50.223 "supported_io_types": { 00:25:50.223 "read": true, 00:25:50.223 "write": true, 00:25:50.223 "unmap": true, 00:25:50.223 "flush": true, 00:25:50.223 "reset": true, 00:25:50.223 "nvme_admin": false, 00:25:50.223 "nvme_io": false, 00:25:50.223 "nvme_io_md": false, 00:25:50.223 "write_zeroes": true, 00:25:50.223 "zcopy": true, 00:25:50.223 "get_zone_info": false, 00:25:50.223 "zone_management": false, 00:25:50.223 "zone_append": false, 00:25:50.223 "compare": false, 00:25:50.223 "compare_and_write": false, 00:25:50.223 "abort": true, 00:25:50.223 "seek_hole": false, 00:25:50.223 "seek_data": false, 00:25:50.223 "copy": true, 00:25:50.223 "nvme_iov_md": false 00:25:50.223 }, 00:25:50.223 "memory_domains": [ 00:25:50.223 { 00:25:50.223 "dma_device_id": "system", 00:25:50.223 "dma_device_type": 1 00:25:50.223 }, 00:25:50.223 { 00:25:50.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.223 "dma_device_type": 2 00:25:50.223 } 00:25:50.223 ], 00:25:50.223 "driver_specific": { 00:25:50.223 "passthru": { 00:25:50.223 "name": "Passthru0", 00:25:50.223 "base_bdev_name": "Malloc2" 00:25:50.223 } 00:25:50.223 } 00:25:50.223 } 00:25:50.223 ]' 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:25:50.223 ************************************ 00:25:50.223 END TEST rpc_daemon_integrity 00:25:50.223 ************************************ 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:50.223 00:25:50.223 real 0m0.357s 00:25:50.223 user 0m0.214s 00:25:50.223 sys 0m0.046s 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.223 01:54:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:50.482 01:54:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:50.482 01:54:59 rpc -- rpc/rpc.sh@84 -- # killprocess 58321 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@950 -- # '[' -z 58321 ']' 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@954 -- # kill -0 58321 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@955 -- # uname 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58321 00:25:50.482 killing process with pid 58321 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58321' 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@969 -- # kill 58321 00:25:50.482 01:54:59 rpc -- common/autotest_common.sh@974 -- # wait 58321 00:25:53.036 00:25:53.036 real 0m5.420s 00:25:53.036 user 0m6.191s 00:25:53.036 sys 0m0.888s 00:25:53.036 01:55:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:53.036 ************************************ 00:25:53.036 END TEST rpc 00:25:53.036 01:55:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:53.036 ************************************ 00:25:53.036 01:55:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:25:53.036 01:55:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:53.036 01:55:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:53.036 01:55:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.036 ************************************ 00:25:53.036 START TEST skip_rpc 00:25:53.036 ************************************ 00:25:53.036 01:55:01 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:25:53.036 * Looking for test storage... 
00:25:53.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:53.036 01:55:01 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:53.036 01:55:01 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:25:53.036 01:55:01 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:53.036 01:55:01 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:25:53.036 01:55:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.037 01:55:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.037 --rc genhtml_branch_coverage=1 00:25:53.037 --rc genhtml_function_coverage=1 00:25:53.037 --rc genhtml_legend=1 00:25:53.037 --rc geninfo_all_blocks=1 00:25:53.037 --rc geninfo_unexecuted_blocks=1 00:25:53.037 00:25:53.037 ' 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.037 --rc genhtml_branch_coverage=1 00:25:53.037 --rc genhtml_function_coverage=1 00:25:53.037 --rc genhtml_legend=1 00:25:53.037 --rc geninfo_all_blocks=1 00:25:53.037 --rc geninfo_unexecuted_blocks=1 00:25:53.037 00:25:53.037 ' 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:25:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.037 --rc genhtml_branch_coverage=1 00:25:53.037 --rc genhtml_function_coverage=1 00:25:53.037 --rc genhtml_legend=1 00:25:53.037 --rc geninfo_all_blocks=1 00:25:53.037 --rc geninfo_unexecuted_blocks=1 00:25:53.037 00:25:53.037 ' 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:53.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.037 --rc genhtml_branch_coverage=1 00:25:53.037 --rc genhtml_function_coverage=1 00:25:53.037 --rc genhtml_legend=1 00:25:53.037 --rc geninfo_all_blocks=1 00:25:53.037 --rc geninfo_unexecuted_blocks=1 00:25:53.037 00:25:53.037 ' 00:25:53.037 01:55:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:53.037 01:55:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:25:53.037 01:55:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:53.037 01:55:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:53.037 ************************************ 00:25:53.037 START TEST skip_rpc 00:25:53.037 ************************************ 00:25:53.037 01:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:25:53.037 01:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58556 00:25:53.037 01:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:53.037 01:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:25:53.037 01:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:25:53.037 [2024-10-15 01:55:02.031712] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:25:53.037 [2024-10-15 01:55:02.031896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58556 ] 00:25:53.295 [2024-10-15 01:55:02.208209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.554 [2024-10-15 01:55:02.488861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58556 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58556 ']' 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58556 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58556 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:58.824 killing process with pid 58556 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58556' 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58556 00:25:58.824 01:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58556 00:26:00.814 00:26:00.814 real 0m7.409s 00:26:00.814 user 0m6.846s 00:26:00.814 sys 0m0.459s 00:26:00.814 ************************************ 00:26:00.814 01:55:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:00.814 01:55:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:00.814 END TEST skip_rpc 00:26:00.814 
************************************ 00:26:00.814 01:55:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:26:00.814 01:55:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:00.814 01:55:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.814 01:55:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:00.814 ************************************ 00:26:00.814 START TEST skip_rpc_with_json 00:26:00.814 ************************************ 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58660 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58660 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58660 ']' 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.814 01:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:00.814 [2024-10-15 01:55:09.483192] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:26:00.814 [2024-10-15 01:55:09.483384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58660 ] 00:26:00.814 [2024-10-15 01:55:09.659343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.072 [2024-10-15 01:55:09.900866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:02.017 [2024-10-15 01:55:10.782775] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:26:02.017 request: 00:26:02.017 { 00:26:02.017 "trtype": "tcp", 00:26:02.017 "method": "nvmf_get_transports", 00:26:02.017 "req_id": 1 00:26:02.017 } 00:26:02.017 Got JSON-RPC error response 00:26:02.017 response: 00:26:02.017 { 00:26:02.017 "code": -19, 00:26:02.017 "message": "No such device" 00:26:02.017 } 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:02.017 [2024-10-15 01:55:10.794877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.017 01:55:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:02.017 { 00:26:02.017 "subsystems": [ 00:26:02.017 { 00:26:02.017 "subsystem": "fsdev", 00:26:02.017 "config": [ 00:26:02.017 { 00:26:02.017 "method": "fsdev_set_opts", 00:26:02.017 "params": { 00:26:02.017 "fsdev_io_pool_size": 65535, 00:26:02.017 "fsdev_io_cache_size": 256 00:26:02.017 } 00:26:02.017 } 00:26:02.017 ] 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "subsystem": "keyring", 00:26:02.017 "config": [] 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "subsystem": "iobuf", 00:26:02.017 "config": [ 00:26:02.017 { 00:26:02.017 "method": "iobuf_set_options", 00:26:02.017 "params": { 00:26:02.017 "small_pool_count": 8192, 00:26:02.017 "large_pool_count": 1024, 00:26:02.017 "small_bufsize": 8192, 00:26:02.017 "large_bufsize": 135168 00:26:02.017 } 00:26:02.017 } 00:26:02.017 ] 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "subsystem": "sock", 00:26:02.017 "config": [ 00:26:02.017 { 00:26:02.017 "method": 
"sock_set_default_impl", 00:26:02.017 "params": { 00:26:02.017 "impl_name": "posix" 00:26:02.017 } 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "method": "sock_impl_set_options", 00:26:02.017 "params": { 00:26:02.017 "impl_name": "ssl", 00:26:02.017 "recv_buf_size": 4096, 00:26:02.017 "send_buf_size": 4096, 00:26:02.017 "enable_recv_pipe": true, 00:26:02.017 "enable_quickack": false, 00:26:02.017 "enable_placement_id": 0, 00:26:02.017 "enable_zerocopy_send_server": true, 00:26:02.017 "enable_zerocopy_send_client": false, 00:26:02.017 "zerocopy_threshold": 0, 00:26:02.017 "tls_version": 0, 00:26:02.017 "enable_ktls": false 00:26:02.017 } 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "method": "sock_impl_set_options", 00:26:02.017 "params": { 00:26:02.017 "impl_name": "posix", 00:26:02.017 "recv_buf_size": 2097152, 00:26:02.017 "send_buf_size": 2097152, 00:26:02.017 "enable_recv_pipe": true, 00:26:02.017 "enable_quickack": false, 00:26:02.017 "enable_placement_id": 0, 00:26:02.017 "enable_zerocopy_send_server": true, 00:26:02.017 "enable_zerocopy_send_client": false, 00:26:02.017 "zerocopy_threshold": 0, 00:26:02.017 "tls_version": 0, 00:26:02.017 "enable_ktls": false 00:26:02.017 } 00:26:02.017 } 00:26:02.017 ] 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "subsystem": "vmd", 00:26:02.017 "config": [] 00:26:02.017 }, 00:26:02.017 { 00:26:02.017 "subsystem": "accel", 00:26:02.017 "config": [ 00:26:02.017 { 00:26:02.017 "method": "accel_set_options", 00:26:02.017 "params": { 00:26:02.018 "small_cache_size": 128, 00:26:02.018 "large_cache_size": 16, 00:26:02.018 "task_count": 2048, 00:26:02.018 "sequence_count": 2048, 00:26:02.018 "buf_count": 2048 00:26:02.018 } 00:26:02.018 } 00:26:02.018 ] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "bdev", 00:26:02.018 "config": [ 00:26:02.018 { 00:26:02.018 "method": "bdev_set_options", 00:26:02.018 "params": { 00:26:02.018 "bdev_io_pool_size": 65535, 00:26:02.018 "bdev_io_cache_size": 256, 00:26:02.018 "bdev_auto_examine": true, 00:26:02.018 "iobuf_small_cache_size": 128, 00:26:02.018 "iobuf_large_cache_size": 16 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "bdev_raid_set_options", 00:26:02.018 "params": { 00:26:02.018 "process_window_size_kb": 1024, 00:26:02.018 "process_max_bandwidth_mb_sec": 0 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "bdev_iscsi_set_options", 00:26:02.018 "params": { 00:26:02.018 "timeout_sec": 30 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "bdev_nvme_set_options", 00:26:02.018 "params": { 00:26:02.018 "action_on_timeout": "none", 00:26:02.018 "timeout_us": 0, 00:26:02.018 "timeout_admin_us": 0, 00:26:02.018 "keep_alive_timeout_ms": 10000, 00:26:02.018 "arbitration_burst": 0, 00:26:02.018 "low_priority_weight": 0, 00:26:02.018 "medium_priority_weight": 0, 00:26:02.018 "high_priority_weight": 0, 00:26:02.018 "nvme_adminq_poll_period_us": 10000, 00:26:02.018 "nvme_ioq_poll_period_us": 0, 00:26:02.018 "io_queue_requests": 0, 00:26:02.018 "delay_cmd_submit": true, 00:26:02.018 "transport_retry_count": 4, 00:26:02.018 "bdev_retry_count": 3, 00:26:02.018 "transport_ack_timeout": 0, 00:26:02.018 "ctrlr_loss_timeout_sec": 0, 00:26:02.018 "reconnect_delay_sec": 0, 00:26:02.018 "fast_io_fail_timeout_sec": 0, 00:26:02.018 "disable_auto_failback": false, 00:26:02.018 "generate_uuids": false, 00:26:02.018 "transport_tos": 0, 00:26:02.018 "nvme_error_stat": false, 00:26:02.018 "rdma_srq_size": 0, 00:26:02.018 "io_path_stat": false, 00:26:02.018 
"allow_accel_sequence": false, 00:26:02.018 "rdma_max_cq_size": 0, 00:26:02.018 "rdma_cm_event_timeout_ms": 0, 00:26:02.018 "dhchap_digests": [ 00:26:02.018 "sha256", 00:26:02.018 "sha384", 00:26:02.018 "sha512" 00:26:02.018 ], 00:26:02.018 "dhchap_dhgroups": [ 00:26:02.018 "null", 00:26:02.018 "ffdhe2048", 00:26:02.018 "ffdhe3072", 00:26:02.018 "ffdhe4096", 00:26:02.018 "ffdhe6144", 00:26:02.018 "ffdhe8192" 00:26:02.018 ] 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "bdev_nvme_set_hotplug", 00:26:02.018 "params": { 00:26:02.018 "period_us": 100000, 00:26:02.018 "enable": false 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "bdev_wait_for_examine" 00:26:02.018 } 00:26:02.018 ] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "scsi", 00:26:02.018 "config": null 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "scheduler", 00:26:02.018 "config": [ 00:26:02.018 { 00:26:02.018 "method": "framework_set_scheduler", 00:26:02.018 "params": { 00:26:02.018 "name": "static" 00:26:02.018 } 00:26:02.018 } 00:26:02.018 ] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "vhost_scsi", 00:26:02.018 "config": [] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "vhost_blk", 00:26:02.018 "config": [] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "ublk", 00:26:02.018 "config": [] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "nbd", 00:26:02.018 "config": [] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "nvmf", 00:26:02.018 "config": [ 00:26:02.018 { 00:26:02.018 "method": "nvmf_set_config", 00:26:02.018 "params": { 00:26:02.018 "discovery_filter": "match_any", 00:26:02.018 "admin_cmd_passthru": { 00:26:02.018 "identify_ctrlr": false 00:26:02.018 }, 00:26:02.018 "dhchap_digests": [ 00:26:02.018 "sha256", 00:26:02.018 "sha384", 00:26:02.018 "sha512" 00:26:02.018 ], 00:26:02.018 "dhchap_dhgroups": [ 00:26:02.018 "null", 00:26:02.018 "ffdhe2048", 00:26:02.018 "ffdhe3072", 00:26:02.018 "ffdhe4096", 00:26:02.018 "ffdhe6144", 00:26:02.018 "ffdhe8192" 00:26:02.018 ] 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "nvmf_set_max_subsystems", 00:26:02.018 "params": { 00:26:02.018 "max_subsystems": 1024 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "nvmf_set_crdt", 00:26:02.018 "params": { 00:26:02.018 "crdt1": 0, 00:26:02.018 "crdt2": 0, 00:26:02.018 "crdt3": 0 00:26:02.018 } 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "method": "nvmf_create_transport", 00:26:02.018 "params": { 00:26:02.018 "trtype": "TCP", 00:26:02.018 "max_queue_depth": 128, 00:26:02.018 "max_io_qpairs_per_ctrlr": 127, 00:26:02.018 "in_capsule_data_size": 4096, 00:26:02.018 "max_io_size": 131072, 00:26:02.018 "io_unit_size": 131072, 00:26:02.018 "max_aq_depth": 128, 00:26:02.018 "num_shared_buffers": 511, 00:26:02.018 "buf_cache_size": 4294967295, 00:26:02.018 "dif_insert_or_strip": false, 00:26:02.018 "zcopy": false, 00:26:02.018 "c2h_success": true, 00:26:02.018 "sock_priority": 0, 00:26:02.018 "abort_timeout_sec": 1, 00:26:02.018 "ack_timeout": 0, 00:26:02.018 "data_wr_pool_size": 0 00:26:02.018 } 00:26:02.018 } 00:26:02.018 ] 00:26:02.018 }, 00:26:02.018 { 00:26:02.018 "subsystem": "iscsi", 00:26:02.018 "config": [ 00:26:02.018 { 00:26:02.018 "method": "iscsi_set_options", 00:26:02.018 "params": { 00:26:02.018 "node_base": "iqn.2016-06.io.spdk", 00:26:02.018 "max_sessions": 128, 00:26:02.018 "max_connections_per_session": 2, 00:26:02.018 "max_queue_depth": 64, 00:26:02.018 "default_time2wait": 2, 
00:26:02.018 "default_time2retain": 20, 00:26:02.018 "first_burst_length": 8192, 00:26:02.018 "immediate_data": true, 00:26:02.018 "allow_duplicated_isid": false, 00:26:02.018 "error_recovery_level": 0, 00:26:02.018 "nop_timeout": 60, 00:26:02.018 "nop_in_interval": 30, 00:26:02.018 "disable_chap": false, 00:26:02.018 "require_chap": false, 00:26:02.018 "mutual_chap": false, 00:26:02.018 "chap_group": 0, 00:26:02.018 "max_large_datain_per_connection": 64, 00:26:02.018 "max_r2t_per_connection": 4, 00:26:02.018 "pdu_pool_size": 36864, 00:26:02.018 "immediate_data_pool_size": 16384, 00:26:02.018 "data_out_pool_size": 2048 00:26:02.018 } 00:26:02.018 } 00:26:02.018 ] 00:26:02.018 } 00:26:02.018 ] 00:26:02.018 } 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58660 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58660 ']' 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58660 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.018 01:55:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58660 00:26:02.018 killing process with pid 58660 00:26:02.018 01:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:02.018 01:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:02.018 01:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58660' 00:26:02.018 01:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58660 00:26:02.018 01:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58660 00:26:04.612 01:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58716 00:26:04.612 01:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:26:04.612 01:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58716 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58716 ']' 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58716 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58716 00:26:09.905 killing process with pid 58716 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58716' 00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58716 
00:26:09.905 01:55:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58716 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:12.437 ************************************ 00:26:12.437 END TEST skip_rpc_with_json 00:26:12.437 ************************************ 00:26:12.437 00:26:12.437 real 0m11.528s 00:26:12.437 user 0m10.902s 00:26:12.437 sys 0m1.035s 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:12.437 01:55:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:26:12.437 01:55:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:12.437 01:55:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:12.437 01:55:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:12.437 ************************************ 00:26:12.437 START TEST skip_rpc_with_delay 00:26:12.437 ************************************ 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:12.437 01:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:12.437 [2024-10-15 01:55:21.063169] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:26:12.437 [2024-10-15 01:55:21.063354] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:26:12.437 01:55:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:26:12.437 01:55:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:12.437 01:55:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:12.437 01:55:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:12.437 00:26:12.437 real 0m0.203s 00:26:12.437 user 0m0.107s 00:26:12.437 sys 0m0.093s 00:26:12.437 01:55:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:12.437 ************************************ 00:26:12.437 END TEST skip_rpc_with_delay 00:26:12.437 ************************************ 00:26:12.437 01:55:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:26:12.437 01:55:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:26:12.437 01:55:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:26:12.437 01:55:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:26:12.437 01:55:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:12.437 01:55:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:12.437 01:55:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:12.437 ************************************ 00:26:12.437 START TEST exit_on_failed_rpc_init 00:26:12.437 ************************************ 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58855 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58855 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58855 ']' 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.437 01:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.437 [2024-10-15 01:55:21.313699] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:26:12.437 [2024-10-15 01:55:21.313912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:26:12.696 [2024-10-15 01:55:21.490143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.954 [2024-10-15 01:55:21.762291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:13.905 01:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:13.905 [2024-10-15 01:55:22.771160] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:26:13.905 [2024-10-15 01:55:22.771384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58873 ] 00:26:14.163 [2024-10-15 01:55:22.951912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.421 [2024-10-15 01:55:23.234026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.421 [2024-10-15 01:55:23.234382] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:14.421 [2024-10-15 01:55:23.234435] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:14.421 [2024-10-15 01:55:23.234456] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58855 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58855 ']' 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58855 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:14.679 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58855 00:26:14.938 killing process with pid 58855 00:26:14.938 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:14.938 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:14.938 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58855' 00:26:14.938 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58855 00:26:14.938 01:55:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58855 00:26:17.468 00:26:17.468 real 0m4.953s 00:26:17.468 user 0m5.641s 00:26:17.468 sys 0m0.732s 00:26:17.468 01:55:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.469 01:55:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 ************************************ 00:26:17.469 END TEST exit_on_failed_rpc_init 00:26:17.469 ************************************ 00:26:17.469 01:55:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:17.469 ************************************ 00:26:17.469 END TEST skip_rpc 00:26:17.469 ************************************ 00:26:17.469 00:26:17.469 real 0m24.476s 00:26:17.469 user 0m23.664s 00:26:17.469 sys 0m2.531s 00:26:17.469 01:55:26 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.469 01:55:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 01:55:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:17.469 01:55:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:17.469 01:55:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.469 01:55:26 -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 
************************************ 00:26:17.469 START TEST rpc_client 00:26:17.469 ************************************ 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:17.469 * Looking for test storage... 00:26:17.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.469 01:55:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:17.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.469 --rc genhtml_branch_coverage=1 00:26:17.469 --rc genhtml_function_coverage=1 00:26:17.469 --rc genhtml_legend=1 00:26:17.469 --rc geninfo_all_blocks=1 00:26:17.469 --rc geninfo_unexecuted_blocks=1 00:26:17.469 00:26:17.469 ' 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:17.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.469 --rc genhtml_branch_coverage=1 00:26:17.469 --rc genhtml_function_coverage=1 00:26:17.469 --rc genhtml_legend=1 00:26:17.469 --rc geninfo_all_blocks=1 00:26:17.469 --rc geninfo_unexecuted_blocks=1 00:26:17.469 00:26:17.469 ' 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:17.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.469 --rc genhtml_branch_coverage=1 00:26:17.469 --rc genhtml_function_coverage=1 00:26:17.469 --rc genhtml_legend=1 00:26:17.469 --rc geninfo_all_blocks=1 00:26:17.469 --rc geninfo_unexecuted_blocks=1 00:26:17.469 00:26:17.469 ' 00:26:17.469 01:55:26 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:17.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.469 --rc genhtml_branch_coverage=1 00:26:17.469 --rc genhtml_function_coverage=1 00:26:17.469 --rc genhtml_legend=1 00:26:17.469 --rc geninfo_all_blocks=1 00:26:17.469 --rc geninfo_unexecuted_blocks=1 00:26:17.469 00:26:17.469 ' 00:26:17.469 01:55:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:26:17.469 OK 00:26:17.728 01:55:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:26:17.728 00:26:17.728 real 0m0.264s 00:26:17.728 user 0m0.164s 00:26:17.728 sys 0m0.107s 00:26:17.728 01:55:26 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.728 01:55:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 ************************************ 00:26:17.728 END TEST rpc_client 00:26:17.728 ************************************ 00:26:17.728 01:55:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:17.728 01:55:26 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:17.728 01:55:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.728 01:55:26 -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 ************************************ 00:26:17.728 START TEST json_config 00:26:17.728 ************************************ 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.728 01:55:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.728 01:55:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.728 01:55:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.728 01:55:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.728 01:55:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.728 01:55:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:26:17.728 01:55:26 json_config -- scripts/common.sh@345 -- # : 1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.728 01:55:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.728 01:55:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@353 -- # local d=1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.728 01:55:26 json_config -- scripts/common.sh@355 -- # echo 1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.728 01:55:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@353 -- # local d=2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.728 01:55:26 json_config -- scripts/common.sh@355 -- # echo 2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.728 01:55:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.728 01:55:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.728 01:55:26 json_config -- scripts/common.sh@368 -- # return 0 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:17.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.728 --rc genhtml_branch_coverage=1 00:26:17.728 --rc genhtml_function_coverage=1 00:26:17.728 --rc genhtml_legend=1 00:26:17.728 --rc geninfo_all_blocks=1 00:26:17.728 --rc geninfo_unexecuted_blocks=1 00:26:17.728 00:26:17.728 ' 00:26:17.728 01:55:26 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:17.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.728 --rc genhtml_branch_coverage=1 00:26:17.728 --rc genhtml_function_coverage=1 00:26:17.728 --rc genhtml_legend=1 00:26:17.728 --rc geninfo_all_blocks=1 00:26:17.728 --rc geninfo_unexecuted_blocks=1 00:26:17.729 00:26:17.729 ' 00:26:17.729 01:55:26 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:17.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.729 --rc genhtml_branch_coverage=1 00:26:17.729 --rc genhtml_function_coverage=1 00:26:17.729 --rc genhtml_legend=1 00:26:17.729 --rc geninfo_all_blocks=1 00:26:17.729 --rc geninfo_unexecuted_blocks=1 00:26:17.729 00:26:17.729 ' 00:26:17.729 01:55:26 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:17.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.729 --rc genhtml_branch_coverage=1 00:26:17.729 --rc genhtml_function_coverage=1 00:26:17.729 --rc genhtml_legend=1 00:26:17.729 --rc geninfo_all_blocks=1 00:26:17.729 --rc geninfo_unexecuted_blocks=1 00:26:17.729 00:26:17.729 ' 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.729 01:55:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96338afd-f13f-4e08-a2c8-83ca5aea5d67 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=96338afd-f13f-4e08-a2c8-83ca5aea5d67 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.729 01:55:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.729 01:55:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.729 01:55:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.729 01:55:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.729 01:55:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.729 01:55:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.729 01:55:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.729 01:55:26 json_config -- paths/export.sh@5 -- # export PATH 00:26:17.729 01:55:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@51 -- # : 0 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.729 01:55:26 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.729 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.729 01:55:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:26:17.729 WARNING: No tests are enabled so not running JSON configuration tests 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:26:17.729 01:55:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:26:17.729 ************************************ 00:26:17.729 END TEST json_config 00:26:17.729 ************************************ 00:26:17.729 00:26:17.729 real 0m0.194s 00:26:17.729 user 0m0.120s 00:26:17.729 sys 0m0.074s 00:26:17.729 01:55:26 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.729 01:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:18.002 01:55:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:18.002 01:55:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:18.002 01:55:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:18.002 01:55:26 -- common/autotest_common.sh@10 -- # set +x 00:26:18.002 ************************************ 00:26:18.002 START TEST json_config_extra_key 00:26:18.002 ************************************ 00:26:18.002 01:55:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:18.002 01:55:26 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:18.002 01:55:26 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:26:18.002 01:55:26 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:18.002 01:55:26 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.002 01:55:26 
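The `[: : integer expression expected` message above is a real, though benign, scripting slip in nvmf/common.sh: line 33 feeds an empty value to the numeric `-eq` test, so `[` complains and the branch is simply not taken. A minimal reproduction with two guards (`flag` is illustrative, standing in for whichever variable that line actually tests):

    unset flag
    [ "$flag" -eq 1 ]           # -> [: : integer expression expected, exit status 2
    [ "${flag:-0}" -eq 1 ]      # guard: default the empty value to 0 before comparing
    [[ ${flag:-} == 1 ]]        # or compare as a string, which tolerates empty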
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.002 01:55:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.003 01:55:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:26:18.003 01:55:26 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.003 01:55:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:18.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.003 --rc genhtml_branch_coverage=1 00:26:18.003 --rc genhtml_function_coverage=1 00:26:18.003 --rc genhtml_legend=1 00:26:18.003 --rc geninfo_all_blocks=1 00:26:18.003 --rc geninfo_unexecuted_blocks=1 00:26:18.003 00:26:18.003 ' 00:26:18.003 01:55:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:18.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.003 --rc genhtml_branch_coverage=1 00:26:18.003 --rc genhtml_function_coverage=1 00:26:18.003 --rc genhtml_legend=1 00:26:18.003 --rc geninfo_all_blocks=1 00:26:18.003 --rc geninfo_unexecuted_blocks=1 00:26:18.003 00:26:18.003 ' 00:26:18.003 01:55:26 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:18.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.004 --rc genhtml_branch_coverage=1 00:26:18.004 --rc genhtml_function_coverage=1 00:26:18.004 --rc genhtml_legend=1 00:26:18.004 --rc geninfo_all_blocks=1 00:26:18.004 --rc geninfo_unexecuted_blocks=1 00:26:18.004 00:26:18.004 ' 00:26:18.004 01:55:26 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:18.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.004 --rc genhtml_branch_coverage=1 00:26:18.004 --rc 
genhtml_function_coverage=1 00:26:18.004 --rc genhtml_legend=1 00:26:18.004 --rc geninfo_all_blocks=1 00:26:18.004 --rc geninfo_unexecuted_blocks=1 00:26:18.004 00:26:18.004 ' 00:26:18.004 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.004 01:55:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96338afd-f13f-4e08-a2c8-83ca5aea5d67 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=96338afd-f13f-4e08-a2c8-83ca5aea5d67 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.005 01:55:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.005 01:55:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.005 01:55:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.005 01:55:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.005 01:55:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.005 01:55:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.005 01:55:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.005 01:55:26 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.005 01:55:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:26:18.006 01:55:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:18.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.006 01:55:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:26:18.006 INFO: launching applications... 
00:26:18.006 01:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:26:18.006 Waiting for target to run... 00:26:18.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59083 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59083 /var/tmp/spdk_tgt.sock 00:26:18.006 01:55:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59083 ']' 00:26:18.006 01:55:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:18.006 01:55:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:18.006 01:55:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.006 01:55:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:18.006 01:55:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.006 01:55:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:18.266 [2024-10-15 01:55:27.098792] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:26:18.266 [2024-10-15 01:55:27.099207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59083 ] 00:26:18.833 [2024-10-15 01:55:27.571011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.833 [2024-10-15 01:55:27.822986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.769 01:55:28 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.769 01:55:28 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:26:19.769 00:26:19.769 01:55:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:26:19.769 INFO: shutting down applications... 
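What the trace above is doing: spdk_tgt is launched with an RPC socket path (`-r /var/tmp/spdk_tgt.sock`) and the extra_key.json config, then `waitforlisten` polls until that socket answers an RPC before the test proceeds. A condensed sketch of the startup handshake (the real waitforlisten in autotest_common.sh has more retries and bookkeeping; this is illustrative):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        kill -0 "$pid" || exit 1     # bail out if the target died before it listened
        sleep 0.1
    done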
00:26:19.769 01:55:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59083 ]] 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59083 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:19.769 01:55:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:20.027 01:55:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:20.027 01:55:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:20.027 01:55:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:20.027 01:55:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:20.593 01:55:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:20.593 01:55:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:20.593 01:55:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:20.593 01:55:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:21.160 01:55:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:21.160 01:55:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:21.160 01:55:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:21.160 01:55:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:21.727 01:55:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:21.727 01:55:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:21.727 01:55:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:21.727 01:55:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:22.294 01:55:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:22.294 01:55:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:22.294 01:55:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:22.294 01:55:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59083 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:26:22.552 SPDK target shutdown done 00:26:22.552 Success 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:26:22.552 01:55:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:26:22.552 01:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:26:22.552 00:26:22.552 real 0m4.765s 00:26:22.552 user 0m4.193s 00:26:22.552 sys 0m0.648s 00:26:22.552 
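The repeated `kill -0` / `sleep 0.5` pairs above are json_config_test_shutdown_app's poll loop: SIGINT asks the target to exit cleanly, then the loop watches for the pid to disappear, up to 30 half-second tries. As a standalone sketch of the same loop:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then    # kill -0 fails once the pid is gone
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done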
************************************ 00:26:22.552 END TEST json_config_extra_key 00:26:22.552 ************************************ 00:26:22.552 01:55:31 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.552 01:55:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 01:55:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:22.835 01:55:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:22.835 01:55:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.835 01:55:31 -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 ************************************ 00:26:22.835 START TEST alias_rpc 00:26:22.835 ************************************ 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:22.835 * Looking for test storage... 00:26:22.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.835 01:55:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:22.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.835 --rc genhtml_branch_coverage=1 00:26:22.835 --rc genhtml_function_coverage=1 00:26:22.835 --rc genhtml_legend=1 00:26:22.835 --rc geninfo_all_blocks=1 00:26:22.835 --rc geninfo_unexecuted_blocks=1 00:26:22.835 00:26:22.835 ' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:22.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.835 --rc genhtml_branch_coverage=1 00:26:22.835 --rc genhtml_function_coverage=1 00:26:22.835 --rc genhtml_legend=1 00:26:22.835 --rc geninfo_all_blocks=1 00:26:22.835 --rc geninfo_unexecuted_blocks=1 00:26:22.835 00:26:22.835 ' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:22.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.835 --rc genhtml_branch_coverage=1 00:26:22.835 --rc genhtml_function_coverage=1 00:26:22.835 --rc genhtml_legend=1 00:26:22.835 --rc geninfo_all_blocks=1 00:26:22.835 --rc geninfo_unexecuted_blocks=1 00:26:22.835 00:26:22.835 ' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:22.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.835 --rc genhtml_branch_coverage=1 00:26:22.835 --rc genhtml_function_coverage=1 00:26:22.835 --rc genhtml_legend=1 00:26:22.835 --rc geninfo_all_blocks=1 00:26:22.835 --rc geninfo_unexecuted_blocks=1 00:26:22.835 00:26:22.835 ' 00:26:22.835 01:55:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:22.835 01:55:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59200 00:26:22.835 01:55:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59200 00:26:22.835 01:55:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59200 ']' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.835 01:55:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.094 [2024-10-15 01:55:31.909341] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:26:23.094 [2024-10-15 01:55:31.909795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59200 ] 00:26:23.094 [2024-10-15 01:55:32.083113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.352 [2024-10-15 01:55:32.331657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.287 01:55:33 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:24.287 01:55:33 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:26:24.287 01:55:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:26:24.545 01:55:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59200 00:26:24.545 01:55:33 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59200 ']' 00:26:24.545 01:55:33 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59200 00:26:24.545 01:55:33 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:26:24.545 01:55:33 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.545 01:55:33 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59200 00:26:24.809 killing process with pid 59200 00:26:24.809 01:55:33 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:24.809 01:55:33 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:24.809 01:55:33 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59200' 00:26:24.809 01:55:33 alias_rpc -- common/autotest_common.sh@969 -- # kill 59200 00:26:24.809 01:55:33 alias_rpc -- common/autotest_common.sh@974 -- # wait 59200 00:26:27.350 ************************************ 00:26:27.350 END TEST alias_rpc 00:26:27.350 ************************************ 00:26:27.350 00:26:27.350 real 0m4.376s 00:26:27.350 user 0m4.541s 00:26:27.350 sys 0m0.639s 00:26:27.350 01:55:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.350 01:55:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:27.350 01:55:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:26:27.350 01:55:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:27.350 01:55:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:27.350 01:55:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.350 01:55:36 -- common/autotest_common.sh@10 -- # set +x 00:26:27.350 ************************************ 00:26:27.350 START TEST spdkcli_tcp 00:26:27.350 ************************************ 00:26:27.350 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:27.350 * Looking for test storage... 
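killprocess, traced at the end of the alias_rpc run above, follows a fixed recipe: confirm the pid is still alive, read its command name for the log (reactor_0 here), then TERM it and `wait` so the exit status is reaped. A sketch of that helper (valid when the pid is a child of the calling shell, as it is in these tests):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid"                                  # still running?
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid" && wait "$pid"                      # reap the exit status
    }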
00:26:27.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:27.350 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.351 01:55:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.351 --rc genhtml_branch_coverage=1 00:26:27.351 --rc genhtml_function_coverage=1 00:26:27.351 --rc genhtml_legend=1 00:26:27.351 --rc geninfo_all_blocks=1 00:26:27.351 --rc geninfo_unexecuted_blocks=1 00:26:27.351 00:26:27.351 ' 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.351 --rc genhtml_branch_coverage=1 00:26:27.351 --rc genhtml_function_coverage=1 00:26:27.351 --rc genhtml_legend=1 00:26:27.351 --rc geninfo_all_blocks=1 00:26:27.351 --rc geninfo_unexecuted_blocks=1 00:26:27.351 
00:26:27.351 ' 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.351 --rc genhtml_branch_coverage=1 00:26:27.351 --rc genhtml_function_coverage=1 00:26:27.351 --rc genhtml_legend=1 00:26:27.351 --rc geninfo_all_blocks=1 00:26:27.351 --rc geninfo_unexecuted_blocks=1 00:26:27.351 00:26:27.351 ' 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.351 --rc genhtml_branch_coverage=1 00:26:27.351 --rc genhtml_function_coverage=1 00:26:27.351 --rc genhtml_legend=1 00:26:27.351 --rc geninfo_all_blocks=1 00:26:27.351 --rc geninfo_unexecuted_blocks=1 00:26:27.351 00:26:27.351 ' 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59307 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59307 00:26:27.351 01:55:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59307 ']' 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.351 01:55:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:27.351 [2024-10-15 01:55:36.355662] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:26:27.351 [2024-10-15 01:55:36.356233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:26:27.609 [2024-10-15 01:55:36.538171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:27.868 [2024-10-15 01:55:36.816806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.868 [2024-10-15 01:55:36.816811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.841 01:55:37 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.841 01:55:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:26:28.841 01:55:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59335 00:26:28.841 01:55:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:26:28.841 01:55:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:26:29.100 [ 00:26:29.100 "bdev_malloc_delete", 00:26:29.100 "bdev_malloc_create", 00:26:29.100 "bdev_null_resize", 00:26:29.100 "bdev_null_delete", 00:26:29.100 "bdev_null_create", 00:26:29.100 "bdev_nvme_cuse_unregister", 00:26:29.100 "bdev_nvme_cuse_register", 00:26:29.100 "bdev_opal_new_user", 00:26:29.100 "bdev_opal_set_lock_state", 00:26:29.100 "bdev_opal_delete", 00:26:29.100 "bdev_opal_get_info", 00:26:29.100 "bdev_opal_create", 00:26:29.100 "bdev_nvme_opal_revert", 00:26:29.100 "bdev_nvme_opal_init", 00:26:29.100 "bdev_nvme_send_cmd", 00:26:29.100 "bdev_nvme_set_keys", 00:26:29.100 "bdev_nvme_get_path_iostat", 00:26:29.100 "bdev_nvme_get_mdns_discovery_info", 00:26:29.100 "bdev_nvme_stop_mdns_discovery", 00:26:29.100 "bdev_nvme_start_mdns_discovery", 00:26:29.100 "bdev_nvme_set_multipath_policy", 00:26:29.100 "bdev_nvme_set_preferred_path", 00:26:29.100 "bdev_nvme_get_io_paths", 00:26:29.100 "bdev_nvme_remove_error_injection", 00:26:29.100 "bdev_nvme_add_error_injection", 00:26:29.100 "bdev_nvme_get_discovery_info", 00:26:29.100 "bdev_nvme_stop_discovery", 00:26:29.100 "bdev_nvme_start_discovery", 00:26:29.100 "bdev_nvme_get_controller_health_info", 00:26:29.100 "bdev_nvme_disable_controller", 00:26:29.100 "bdev_nvme_enable_controller", 00:26:29.100 "bdev_nvme_reset_controller", 00:26:29.100 "bdev_nvme_get_transport_statistics", 00:26:29.100 "bdev_nvme_apply_firmware", 00:26:29.100 "bdev_nvme_detach_controller", 00:26:29.100 "bdev_nvme_get_controllers", 00:26:29.100 "bdev_nvme_attach_controller", 00:26:29.100 "bdev_nvme_set_hotplug", 00:26:29.100 "bdev_nvme_set_options", 00:26:29.100 "bdev_passthru_delete", 00:26:29.100 "bdev_passthru_create", 00:26:29.100 "bdev_lvol_set_parent_bdev", 00:26:29.100 "bdev_lvol_set_parent", 00:26:29.100 "bdev_lvol_check_shallow_copy", 00:26:29.100 "bdev_lvol_start_shallow_copy", 00:26:29.100 "bdev_lvol_grow_lvstore", 00:26:29.100 "bdev_lvol_get_lvols", 00:26:29.100 "bdev_lvol_get_lvstores", 00:26:29.100 "bdev_lvol_delete", 00:26:29.100 "bdev_lvol_set_read_only", 00:26:29.100 "bdev_lvol_resize", 00:26:29.100 "bdev_lvol_decouple_parent", 00:26:29.100 "bdev_lvol_inflate", 00:26:29.100 "bdev_lvol_rename", 00:26:29.100 "bdev_lvol_clone_bdev", 00:26:29.100 "bdev_lvol_clone", 00:26:29.100 "bdev_lvol_snapshot", 00:26:29.100 "bdev_lvol_create", 00:26:29.100 "bdev_lvol_delete_lvstore", 00:26:29.100 "bdev_lvol_rename_lvstore", 00:26:29.100 
"bdev_lvol_create_lvstore", 00:26:29.100 "bdev_raid_set_options", 00:26:29.100 "bdev_raid_remove_base_bdev", 00:26:29.100 "bdev_raid_add_base_bdev", 00:26:29.100 "bdev_raid_delete", 00:26:29.100 "bdev_raid_create", 00:26:29.100 "bdev_raid_get_bdevs", 00:26:29.100 "bdev_error_inject_error", 00:26:29.100 "bdev_error_delete", 00:26:29.100 "bdev_error_create", 00:26:29.100 "bdev_split_delete", 00:26:29.100 "bdev_split_create", 00:26:29.100 "bdev_delay_delete", 00:26:29.100 "bdev_delay_create", 00:26:29.100 "bdev_delay_update_latency", 00:26:29.100 "bdev_zone_block_delete", 00:26:29.100 "bdev_zone_block_create", 00:26:29.100 "blobfs_create", 00:26:29.100 "blobfs_detect", 00:26:29.100 "blobfs_set_cache_size", 00:26:29.100 "bdev_xnvme_delete", 00:26:29.100 "bdev_xnvme_create", 00:26:29.100 "bdev_aio_delete", 00:26:29.100 "bdev_aio_rescan", 00:26:29.100 "bdev_aio_create", 00:26:29.100 "bdev_ftl_set_property", 00:26:29.100 "bdev_ftl_get_properties", 00:26:29.100 "bdev_ftl_get_stats", 00:26:29.100 "bdev_ftl_unmap", 00:26:29.100 "bdev_ftl_unload", 00:26:29.100 "bdev_ftl_delete", 00:26:29.100 "bdev_ftl_load", 00:26:29.100 "bdev_ftl_create", 00:26:29.100 "bdev_virtio_attach_controller", 00:26:29.100 "bdev_virtio_scsi_get_devices", 00:26:29.100 "bdev_virtio_detach_controller", 00:26:29.100 "bdev_virtio_blk_set_hotplug", 00:26:29.100 "bdev_iscsi_delete", 00:26:29.100 "bdev_iscsi_create", 00:26:29.100 "bdev_iscsi_set_options", 00:26:29.100 "accel_error_inject_error", 00:26:29.100 "ioat_scan_accel_module", 00:26:29.100 "dsa_scan_accel_module", 00:26:29.100 "iaa_scan_accel_module", 00:26:29.100 "keyring_file_remove_key", 00:26:29.100 "keyring_file_add_key", 00:26:29.101 "keyring_linux_set_options", 00:26:29.101 "fsdev_aio_delete", 00:26:29.101 "fsdev_aio_create", 00:26:29.101 "iscsi_get_histogram", 00:26:29.101 "iscsi_enable_histogram", 00:26:29.101 "iscsi_set_options", 00:26:29.101 "iscsi_get_auth_groups", 00:26:29.101 "iscsi_auth_group_remove_secret", 00:26:29.101 "iscsi_auth_group_add_secret", 00:26:29.101 "iscsi_delete_auth_group", 00:26:29.101 "iscsi_create_auth_group", 00:26:29.101 "iscsi_set_discovery_auth", 00:26:29.101 "iscsi_get_options", 00:26:29.101 "iscsi_target_node_request_logout", 00:26:29.101 "iscsi_target_node_set_redirect", 00:26:29.101 "iscsi_target_node_set_auth", 00:26:29.101 "iscsi_target_node_add_lun", 00:26:29.101 "iscsi_get_stats", 00:26:29.101 "iscsi_get_connections", 00:26:29.101 "iscsi_portal_group_set_auth", 00:26:29.101 "iscsi_start_portal_group", 00:26:29.101 "iscsi_delete_portal_group", 00:26:29.101 "iscsi_create_portal_group", 00:26:29.101 "iscsi_get_portal_groups", 00:26:29.101 "iscsi_delete_target_node", 00:26:29.101 "iscsi_target_node_remove_pg_ig_maps", 00:26:29.101 "iscsi_target_node_add_pg_ig_maps", 00:26:29.101 "iscsi_create_target_node", 00:26:29.101 "iscsi_get_target_nodes", 00:26:29.101 "iscsi_delete_initiator_group", 00:26:29.101 "iscsi_initiator_group_remove_initiators", 00:26:29.101 "iscsi_initiator_group_add_initiators", 00:26:29.101 "iscsi_create_initiator_group", 00:26:29.101 "iscsi_get_initiator_groups", 00:26:29.101 "nvmf_set_crdt", 00:26:29.101 "nvmf_set_config", 00:26:29.101 "nvmf_set_max_subsystems", 00:26:29.101 "nvmf_stop_mdns_prr", 00:26:29.101 "nvmf_publish_mdns_prr", 00:26:29.101 "nvmf_subsystem_get_listeners", 00:26:29.101 "nvmf_subsystem_get_qpairs", 00:26:29.101 "nvmf_subsystem_get_controllers", 00:26:29.101 "nvmf_get_stats", 00:26:29.101 "nvmf_get_transports", 00:26:29.101 "nvmf_create_transport", 00:26:29.101 "nvmf_get_targets", 00:26:29.101 
"nvmf_delete_target", 00:26:29.101 "nvmf_create_target", 00:26:29.101 "nvmf_subsystem_allow_any_host", 00:26:29.101 "nvmf_subsystem_set_keys", 00:26:29.101 "nvmf_subsystem_remove_host", 00:26:29.101 "nvmf_subsystem_add_host", 00:26:29.101 "nvmf_ns_remove_host", 00:26:29.101 "nvmf_ns_add_host", 00:26:29.101 "nvmf_subsystem_remove_ns", 00:26:29.101 "nvmf_subsystem_set_ns_ana_group", 00:26:29.101 "nvmf_subsystem_add_ns", 00:26:29.101 "nvmf_subsystem_listener_set_ana_state", 00:26:29.101 "nvmf_discovery_get_referrals", 00:26:29.101 "nvmf_discovery_remove_referral", 00:26:29.101 "nvmf_discovery_add_referral", 00:26:29.101 "nvmf_subsystem_remove_listener", 00:26:29.101 "nvmf_subsystem_add_listener", 00:26:29.101 "nvmf_delete_subsystem", 00:26:29.101 "nvmf_create_subsystem", 00:26:29.101 "nvmf_get_subsystems", 00:26:29.101 "env_dpdk_get_mem_stats", 00:26:29.101 "nbd_get_disks", 00:26:29.101 "nbd_stop_disk", 00:26:29.101 "nbd_start_disk", 00:26:29.101 "ublk_recover_disk", 00:26:29.101 "ublk_get_disks", 00:26:29.101 "ublk_stop_disk", 00:26:29.101 "ublk_start_disk", 00:26:29.101 "ublk_destroy_target", 00:26:29.101 "ublk_create_target", 00:26:29.101 "virtio_blk_create_transport", 00:26:29.101 "virtio_blk_get_transports", 00:26:29.101 "vhost_controller_set_coalescing", 00:26:29.101 "vhost_get_controllers", 00:26:29.101 "vhost_delete_controller", 00:26:29.101 "vhost_create_blk_controller", 00:26:29.101 "vhost_scsi_controller_remove_target", 00:26:29.101 "vhost_scsi_controller_add_target", 00:26:29.101 "vhost_start_scsi_controller", 00:26:29.101 "vhost_create_scsi_controller", 00:26:29.101 "thread_set_cpumask", 00:26:29.101 "scheduler_set_options", 00:26:29.101 "framework_get_governor", 00:26:29.101 "framework_get_scheduler", 00:26:29.101 "framework_set_scheduler", 00:26:29.101 "framework_get_reactors", 00:26:29.101 "thread_get_io_channels", 00:26:29.101 "thread_get_pollers", 00:26:29.101 "thread_get_stats", 00:26:29.101 "framework_monitor_context_switch", 00:26:29.101 "spdk_kill_instance", 00:26:29.101 "log_enable_timestamps", 00:26:29.101 "log_get_flags", 00:26:29.101 "log_clear_flag", 00:26:29.101 "log_set_flag", 00:26:29.101 "log_get_level", 00:26:29.101 "log_set_level", 00:26:29.101 "log_get_print_level", 00:26:29.101 "log_set_print_level", 00:26:29.101 "framework_enable_cpumask_locks", 00:26:29.101 "framework_disable_cpumask_locks", 00:26:29.101 "framework_wait_init", 00:26:29.101 "framework_start_init", 00:26:29.101 "scsi_get_devices", 00:26:29.101 "bdev_get_histogram", 00:26:29.101 "bdev_enable_histogram", 00:26:29.101 "bdev_set_qos_limit", 00:26:29.101 "bdev_set_qd_sampling_period", 00:26:29.101 "bdev_get_bdevs", 00:26:29.101 "bdev_reset_iostat", 00:26:29.101 "bdev_get_iostat", 00:26:29.101 "bdev_examine", 00:26:29.101 "bdev_wait_for_examine", 00:26:29.101 "bdev_set_options", 00:26:29.101 "accel_get_stats", 00:26:29.101 "accel_set_options", 00:26:29.101 "accel_set_driver", 00:26:29.101 "accel_crypto_key_destroy", 00:26:29.101 "accel_crypto_keys_get", 00:26:29.101 "accel_crypto_key_create", 00:26:29.101 "accel_assign_opc", 00:26:29.101 "accel_get_module_info", 00:26:29.101 "accel_get_opc_assignments", 00:26:29.101 "vmd_rescan", 00:26:29.101 "vmd_remove_device", 00:26:29.101 "vmd_enable", 00:26:29.101 "sock_get_default_impl", 00:26:29.101 "sock_set_default_impl", 00:26:29.101 "sock_impl_set_options", 00:26:29.101 "sock_impl_get_options", 00:26:29.101 "iobuf_get_stats", 00:26:29.101 "iobuf_set_options", 00:26:29.101 "keyring_get_keys", 00:26:29.101 "framework_get_pci_devices", 00:26:29.101 
"framework_get_config", 00:26:29.101 "framework_get_subsystems", 00:26:29.101 "fsdev_set_opts", 00:26:29.101 "fsdev_get_opts", 00:26:29.101 "trace_get_info", 00:26:29.101 "trace_get_tpoint_group_mask", 00:26:29.101 "trace_disable_tpoint_group", 00:26:29.101 "trace_enable_tpoint_group", 00:26:29.101 "trace_clear_tpoint_mask", 00:26:29.101 "trace_set_tpoint_mask", 00:26:29.101 "notify_get_notifications", 00:26:29.101 "notify_get_types", 00:26:29.101 "spdk_get_version", 00:26:29.101 "rpc_get_methods" 00:26:29.101 ] 00:26:29.101 01:55:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:26:29.101 01:55:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.101 01:55:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:29.101 01:55:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:29.101 01:55:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59307 00:26:29.101 01:55:38 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59307 ']' 00:26:29.101 01:55:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59307 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59307 00:26:29.102 killing process with pid 59307 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59307' 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59307 00:26:29.102 01:55:38 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59307 00:26:31.630 ************************************ 00:26:31.630 END TEST spdkcli_tcp 00:26:31.630 ************************************ 00:26:31.630 00:26:31.630 real 0m4.395s 00:26:31.630 user 0m7.671s 00:26:31.630 sys 0m0.684s 00:26:31.630 01:55:40 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:31.630 01:55:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.630 01:55:40 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:31.630 01:55:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:31.630 01:55:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:31.630 01:55:40 -- common/autotest_common.sh@10 -- # set +x 00:26:31.630 ************************************ 00:26:31.630 START TEST dpdk_mem_utility 00:26:31.630 ************************************ 00:26:31.630 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:31.630 * Looking for test storage... 
00:26:31.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:26:31.630 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:31.630 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:26:31.630 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:31.630 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:26:31.630 01:55:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:26:31.889 01:55:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.889 01:55:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:26:31.889 01:55:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.889 01:55:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.889 01:55:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.889 01:55:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.889 --rc genhtml_branch_coverage=1 00:26:31.889 --rc genhtml_function_coverage=1 00:26:31.889 --rc genhtml_legend=1 00:26:31.889 --rc geninfo_all_blocks=1 00:26:31.889 --rc geninfo_unexecuted_blocks=1 00:26:31.889 00:26:31.889 ' 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.889 --rc 
genhtml_branch_coverage=1 00:26:31.889 --rc genhtml_function_coverage=1 00:26:31.889 --rc genhtml_legend=1 00:26:31.889 --rc geninfo_all_blocks=1 00:26:31.889 --rc geninfo_unexecuted_blocks=1 00:26:31.889 00:26:31.889 ' 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.889 --rc genhtml_branch_coverage=1 00:26:31.889 --rc genhtml_function_coverage=1 00:26:31.889 --rc genhtml_legend=1 00:26:31.889 --rc geninfo_all_blocks=1 00:26:31.889 --rc geninfo_unexecuted_blocks=1 00:26:31.889 00:26:31.889 ' 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.889 --rc genhtml_branch_coverage=1 00:26:31.889 --rc genhtml_function_coverage=1 00:26:31.889 --rc genhtml_legend=1 00:26:31.889 --rc geninfo_all_blocks=1 00:26:31.889 --rc geninfo_unexecuted_blocks=1 00:26:31.889 00:26:31.889 ' 00:26:31.889 01:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:31.889 01:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59435 00:26:31.889 01:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59435 00:26:31.889 01:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59435 ']' 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.889 01:55:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:31.889 [2024-10-15 01:55:40.775990] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:26:31.889 [2024-10-15 01:55:40.776399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:26:32.184 [2024-10-15 01:55:40.954300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.441 [2024-10-15 01:55:41.230429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.376 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:33.376 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:26:33.376 01:55:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:26:33.376 01:55:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:26:33.376 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.376 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:33.376 { 00:26:33.376 "filename": "/tmp/spdk_mem_dump.txt" 00:26:33.376 } 00:26:33.376 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.376 01:55:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:33.376 DPDK memory size 866.000000 MiB in 1 heap(s) 00:26:33.376 1 heaps totaling size 866.000000 MiB 00:26:33.376 size: 866.000000 MiB heap id: 0 00:26:33.376 end heaps---------- 00:26:33.376 9 mempools totaling size 642.649841 MiB 00:26:33.376 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:26:33.376 size: 158.602051 MiB name: PDU_data_out_Pool 00:26:33.376 size: 92.545471 MiB name: bdev_io_59435 00:26:33.376 size: 51.011292 MiB name: evtpool_59435 00:26:33.376 size: 50.003479 MiB name: msgpool_59435 00:26:33.376 size: 36.509338 MiB name: fsdev_io_59435 00:26:33.376 size: 21.763794 MiB name: PDU_Pool 00:26:33.376 size: 19.513306 MiB name: SCSI_TASK_Pool 00:26:33.376 size: 0.026123 MiB name: Session_Pool 00:26:33.376 end mempools------- 00:26:33.376 6 memzones totaling size 4.142822 MiB 00:26:33.376 size: 1.000366 MiB name: RG_ring_0_59435 00:26:33.376 size: 1.000366 MiB name: RG_ring_1_59435 00:26:33.376 size: 1.000366 MiB name: RG_ring_4_59435 00:26:33.376 size: 1.000366 MiB name: RG_ring_5_59435 00:26:33.376 size: 0.125366 MiB name: RG_ring_2_59435 00:26:33.376 size: 0.015991 MiB name: RG_ring_3_59435 00:26:33.376 end memzones------- 00:26:33.376 01:55:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:26:33.376 heap id: 0 total size: 866.000000 MiB number of busy elements: 310 number of free elements: 19 00:26:33.376 list of free elements. 
size: 19.914795 MiB 00:26:33.376 element at address: 0x200000400000 with size: 1.999451 MiB 00:26:33.376 element at address: 0x200000800000 with size: 1.996887 MiB 00:26:33.376 element at address: 0x200009600000 with size: 1.995972 MiB 00:26:33.376 element at address: 0x20000d800000 with size: 1.995972 MiB 00:26:33.376 element at address: 0x200007000000 with size: 1.991028 MiB 00:26:33.376 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:26:33.376 element at address: 0x20001c300040 with size: 0.999939 MiB 00:26:33.376 element at address: 0x20001c400000 with size: 0.999084 MiB 00:26:33.376 element at address: 0x200035000000 with size: 0.994324 MiB 00:26:33.376 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:26:33.376 element at address: 0x20001c700040 with size: 0.936401 MiB 00:26:33.376 element at address: 0x200000200000 with size: 0.832153 MiB 00:26:33.376 element at address: 0x20001de00000 with size: 0.562195 MiB 00:26:33.376 element at address: 0x200003e00000 with size: 0.490662 MiB 00:26:33.376 element at address: 0x20001c000000 with size: 0.488953 MiB 00:26:33.376 element at address: 0x20001c800000 with size: 0.485413 MiB 00:26:33.376 element at address: 0x200015e00000 with size: 0.443237 MiB 00:26:33.376 element at address: 0x20002b200000 with size: 0.390442 MiB 00:26:33.376 element at address: 0x200003a00000 with size: 0.353088 MiB 00:26:33.376 list of standard malloc elements. size: 199.286499 MiB 00:26:33.376 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:26:33.376 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:26:33.376 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:26:33.376 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:26:33.376 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:26:33.376 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:26:33.377 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:26:33.377 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:26:33.377 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:26:33.377 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:26:33.377 element at address: 0x200015dff040 with size: 0.000305 MiB 00:26:33.377 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6300 with size: 0.000244 MiB 
00:26:33.377 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003aff800 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003affa80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:26:33.377 element at 
address: 0x200003e7e4c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003efef00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200003eff000 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff180 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff280 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff380 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff480 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff580 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff680 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff780 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff880 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dff980 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71780 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71880 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71980 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e72080 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015e72180 with size: 0.000244 MiB 00:26:33.377 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001bcfdd00 
with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:26:33.377 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de921c0 with size: 0.000244 MiB 
00:26:33.378 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:26:33.378 element at 
address: 0x20001de953c0 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b264040 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26dc80 
with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:26:33.378 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:26:33.379 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:26:33.379 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:26:33.379 list of memzone associated elements. 
size: 646.798706 MiB 00:26:33.379 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:26:33.379 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:26:33.379 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:26:33.379 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:26:33.379 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:26:33.379 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59435_0 00:26:33.379 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:26:33.379 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59435_0 00:26:33.379 element at address: 0x200003fff340 with size: 48.003113 MiB 00:26:33.379 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59435_0 00:26:33.379 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:26:33.379 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59435_0 00:26:33.379 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:26:33.379 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:26:33.379 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:26:33.379 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:26:33.379 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:26:33.379 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59435 00:26:33.379 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:26:33.379 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59435 00:26:33.379 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:26:33.379 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59435 00:26:33.379 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:26:33.379 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:26:33.379 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:26:33.379 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:26:33.379 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:26:33.379 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:26:33.379 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:26:33.379 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:26:33.379 element at address: 0x200003eff100 with size: 1.000549 MiB 00:26:33.379 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59435 00:26:33.379 element at address: 0x200003affb80 with size: 1.000549 MiB 00:26:33.379 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59435 00:26:33.379 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:26:33.379 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59435 00:26:33.379 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:26:33.379 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59435 00:26:33.379 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:26:33.379 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59435 00:26:33.379 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:26:33.379 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59435 00:26:33.379 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:26:33.379 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:26:33.379 element at address: 0x200015e72280 with size: 0.500549 MiB 00:26:33.379 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:26:33.379 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:26:33.379 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:26:33.379 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:26:33.379 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59435 00:26:33.379 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:26:33.379 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:26:33.379 element at address: 0x20002b264140 with size: 0.023804 MiB 00:26:33.379 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:26:33.379 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:26:33.379 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59435 00:26:33.379 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:26:33.379 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:26:33.379 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:26:33.379 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59435 00:26:33.379 element at address: 0x200003aff900 with size: 0.000366 MiB 00:26:33.379 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59435 00:26:33.379 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:26:33.379 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59435 00:26:33.379 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:26:33.379 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:26:33.379 01:55:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:26:33.379 01:55:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59435 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59435 ']' 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59435 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59435 00:26:33.379 killing process with pid 59435 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59435' 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59435 00:26:33.379 01:55:42 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59435 00:26:35.909 ************************************ 00:26:35.909 END TEST dpdk_mem_utility 00:26:35.909 ************************************ 00:26:35.909 00:26:35.909 real 0m4.163s 00:26:35.909 user 0m4.169s 00:26:35.909 sys 0m0.605s 00:26:35.909 01:55:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.909 01:55:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:35.909 01:55:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:35.909 01:55:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:35.909 01:55:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.909 01:55:44 -- common/autotest_common.sh@10 -- # set +x 
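The killprocess trace just above (kill -0, uname, ps --no-headers -o comm=, then kill and wait) is how the harness tears down the spdk_tgt it started. A simplified reconstruction, inferred from the xtrace rather than from the helper's actual source in test/common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1      # no pid supplied
        kill -0 "$pid" || return 1     # bail if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # the trace checks whether the target is a sudo wrapper;
            # what the true branch does is not visible in this log
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && :   # elided
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                    # reap it so the wrapper sees a clean exit
    }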
00:26:35.909 ************************************ 00:26:35.909 START TEST event 00:26:35.909 ************************************ 00:26:35.909 01:55:44 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:35.909 * Looking for test storage... 00:26:35.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:35.909 01:55:44 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:35.909 01:55:44 event -- common/autotest_common.sh@1681 -- # lcov --version 00:26:35.909 01:55:44 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:35.909 01:55:44 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:35.909 01:55:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.909 01:55:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.909 01:55:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.909 01:55:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.909 01:55:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.909 01:55:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.909 01:55:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.909 01:55:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.909 01:55:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.909 01:55:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.910 01:55:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.910 01:55:44 event -- scripts/common.sh@344 -- # case "$op" in 00:26:35.910 01:55:44 event -- scripts/common.sh@345 -- # : 1 00:26:35.910 01:55:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.910 01:55:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.910 01:55:44 event -- scripts/common.sh@365 -- # decimal 1 00:26:35.910 01:55:44 event -- scripts/common.sh@353 -- # local d=1 00:26:35.910 01:55:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.910 01:55:44 event -- scripts/common.sh@355 -- # echo 1 00:26:35.910 01:55:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.910 01:55:44 event -- scripts/common.sh@366 -- # decimal 2 00:26:35.910 01:55:44 event -- scripts/common.sh@353 -- # local d=2 00:26:35.910 01:55:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.910 01:55:44 event -- scripts/common.sh@355 -- # echo 2 00:26:35.910 01:55:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.910 01:55:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.910 01:55:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.910 01:55:44 event -- scripts/common.sh@368 -- # return 0 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:35.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.910 --rc genhtml_branch_coverage=1 00:26:35.910 --rc genhtml_function_coverage=1 00:26:35.910 --rc genhtml_legend=1 00:26:35.910 --rc geninfo_all_blocks=1 00:26:35.910 --rc geninfo_unexecuted_blocks=1 00:26:35.910 00:26:35.910 ' 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:35.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.910 --rc genhtml_branch_coverage=1 00:26:35.910 --rc genhtml_function_coverage=1 00:26:35.910 --rc genhtml_legend=1 00:26:35.910 --rc 
geninfo_all_blocks=1 00:26:35.910 --rc geninfo_unexecuted_blocks=1 00:26:35.910 00:26:35.910 ' 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:35.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.910 --rc genhtml_branch_coverage=1 00:26:35.910 --rc genhtml_function_coverage=1 00:26:35.910 --rc genhtml_legend=1 00:26:35.910 --rc geninfo_all_blocks=1 00:26:35.910 --rc geninfo_unexecuted_blocks=1 00:26:35.910 00:26:35.910 ' 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:35.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.910 --rc genhtml_branch_coverage=1 00:26:35.910 --rc genhtml_function_coverage=1 00:26:35.910 --rc genhtml_legend=1 00:26:35.910 --rc geninfo_all_blocks=1 00:26:35.910 --rc geninfo_unexecuted_blocks=1 00:26:35.910 00:26:35.910 ' 00:26:35.910 01:55:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:35.910 01:55:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:26:35.910 01:55:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:26:35.910 01:55:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.910 01:55:44 event -- common/autotest_common.sh@10 -- # set +x 00:26:36.168 ************************************ 00:26:36.168 START TEST event_perf 00:26:36.168 ************************************ 00:26:36.168 01:55:44 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:36.168 Running I/O for 1 seconds...[2024-10-15 01:55:44.970364] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:26:36.168 [2024-10-15 01:55:44.970687] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:26:36.168 [2024-10-15 01:55:45.148235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.511 [2024-10-15 01:55:45.397540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.511 [2024-10-15 01:55:45.397634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.511 [2024-10-15 01:55:45.397705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.511 Running I/O for 1 seconds...[2024-10-15 01:55:45.397720] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.885 00:26:37.886 lcore 0: 194613 00:26:37.886 lcore 1: 194613 00:26:37.886 lcore 2: 194611 00:26:37.886 lcore 3: 194610 00:26:37.886 done. 
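Every test section above re-runs the same lcov version probe (lt 1.15 2 via cmp_versions) before exporting LCOV_OPTS, apparently to pick the --rc flag spellings the installed lcov accepts. A simplified sketch of those scripts/common.sh helpers, reconstructed from the trace (the real code differs in details such as error handling):

    decimal() {    # reduce one version component to a plain integer
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 succeeds: 1 < 2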
00:26:37.886 00:26:37.886 ************************************ 00:26:37.886 END TEST event_perf 00:26:37.886 ************************************ 00:26:37.886 real 0m1.858s 00:26:37.886 user 0m4.592s 00:26:37.886 sys 0m0.142s 00:26:37.886 01:55:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.886 01:55:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 01:55:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:37.886 01:55:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:37.886 01:55:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.886 01:55:46 event -- common/autotest_common.sh@10 -- # set +x 00:26:37.886 ************************************ 00:26:37.886 START TEST event_reactor 00:26:37.886 ************************************ 00:26:37.886 01:55:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:37.886 [2024-10-15 01:55:46.875065] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:26:37.886 [2024-10-15 01:55:46.875371] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:26:38.144 [2024-10-15 01:55:47.045002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.408 [2024-10-15 01:55:47.339719] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.797 test_start 00:26:39.797 oneshot 00:26:39.797 tick 100 00:26:39.797 tick 100 00:26:39.797 tick 250 00:26:39.797 tick 100 00:26:39.797 tick 100 00:26:39.797 tick 100 00:26:39.797 tick 250 00:26:39.797 tick 500 00:26:39.797 tick 100 00:26:39.797 tick 100 00:26:39.797 tick 250 00:26:39.797 tick 100 00:26:39.797 tick 100 00:26:39.797 test_end 00:26:39.797 00:26:39.797 real 0m1.911s 00:26:39.797 user 0m1.671s 00:26:39.797 sys 0m0.127s 00:26:39.797 01:55:48 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:39.797 ************************************ 00:26:39.797 END TEST event_reactor 00:26:39.797 ************************************ 00:26:39.797 01:55:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:26:39.797 01:55:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:39.797 01:55:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:39.797 01:55:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:39.797 01:55:48 event -- common/autotest_common.sh@10 -- # set +x 00:26:39.797 ************************************ 00:26:39.797 START TEST event_reactor_perf 00:26:39.797 ************************************ 00:26:39.797 01:55:48 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:40.056 [2024-10-15 01:55:48.840779] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:26:40.056 [2024-10-15 01:55:48.840927] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:26:40.056 [2024-10-15 01:55:49.001847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.313 [2024-10-15 01:55:49.239562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.687 test_start 00:26:41.687 test_end 00:26:41.687 Performance: 282985 events per second 00:26:41.687 00:26:41.687 real 0m1.853s 00:26:41.687 user 0m1.638s 00:26:41.687 sys 0m0.105s 00:26:41.687 01:55:50 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:41.687 01:55:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.687 ************************************ 00:26:41.687 END TEST event_reactor_perf 00:26:41.687 ************************************ 00:26:41.687 01:55:50 event -- event/event.sh@49 -- # uname -s 00:26:41.946 01:55:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:26:41.946 01:55:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:41.946 01:55:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:41.946 01:55:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:41.946 01:55:50 event -- common/autotest_common.sh@10 -- # set +x 00:26:41.946 ************************************ 00:26:41.946 START TEST event_scheduler 00:26:41.946 ************************************ 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:41.946 * Looking for test storage... 
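The START TEST / END TEST banners and the real/user/sys timing lines that bracket each section come from the harness's run_test wrapper. A rough sketch of that wrapper, assuming behavior only from the banners and the '[' 2 -le 1 ']' argument check visible in this log, not from the actual definition in autotest_common.sh:

    run_test() {
        local name=$1; shift
        [ $# -le 1 ] || true           # the trace shows an arg-count check here
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                      # produces the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    # e.g. run_test event_scheduler .../test/event/scheduler/scheduler.sh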
00:26:41.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.946 01:55:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.946 --rc genhtml_branch_coverage=1 00:26:41.946 --rc genhtml_function_coverage=1 00:26:41.946 --rc genhtml_legend=1 00:26:41.946 --rc geninfo_all_blocks=1 00:26:41.946 --rc geninfo_unexecuted_blocks=1 00:26:41.946 00:26:41.946 ' 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.946 --rc genhtml_branch_coverage=1 00:26:41.946 --rc genhtml_function_coverage=1 00:26:41.946 --rc genhtml_legend=1 00:26:41.946 --rc geninfo_all_blocks=1 00:26:41.946 --rc geninfo_unexecuted_blocks=1 00:26:41.946 00:26:41.946 ' 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.946 --rc genhtml_branch_coverage=1 00:26:41.946 --rc genhtml_function_coverage=1 00:26:41.946 --rc genhtml_legend=1 00:26:41.946 --rc geninfo_all_blocks=1 00:26:41.946 --rc geninfo_unexecuted_blocks=1 00:26:41.946 00:26:41.946 ' 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.946 --rc genhtml_branch_coverage=1 00:26:41.946 --rc genhtml_function_coverage=1 00:26:41.946 --rc genhtml_legend=1 00:26:41.946 --rc geninfo_all_blocks=1 00:26:41.946 --rc geninfo_unexecuted_blocks=1 00:26:41.946 00:26:41.946 ' 00:26:41.946 01:55:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:26:41.946 01:55:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59706 00:26:41.946 01:55:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:26:41.946 01:55:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:26:41.946 01:55:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59706 00:26:41.946 01:55:50 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59706 ']' 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.946 01:55:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:42.204 [2024-10-15 01:55:51.016275] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:26:42.204 [2024-10-15 01:55:51.016721] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59706 ] 00:26:42.204 [2024-10-15 01:55:51.197534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.768 [2024-10-15 01:55:51.487500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.768 [2024-10-15 01:55:51.487660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.768 [2024-10-15 01:55:51.487734] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.768 [2024-10-15 01:55:51.487752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.026 01:55:51 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.026 01:55:51 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:26:43.026 01:55:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:26:43.026 01:55:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.026 01:55:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.026 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.026 POWER: Cannot set governor of lcore 0 to userspace 00:26:43.026 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.026 POWER: Cannot set governor of lcore 0 to performance 00:26:43.026 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.026 POWER: Cannot set governor of lcore 0 to userspace 00:26:43.026 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.026 POWER: Cannot set governor of lcore 0 to userspace 00:26:43.026 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:26:43.026 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:26:43.026 POWER: Unable to set Power Management Environment for lcore 0 00:26:43.026 [2024-10-15 01:55:51.998715] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:26:43.026 [2024-10-15 01:55:51.998741] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:26:43.026 [2024-10-15 01:55:51.998759] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:26:43.026 [2024-10-15 01:55:51.998793] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:43.026 [2024-10-15 01:55:51.998807] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:43.026 [2024-10-15 01:55:51.998822] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:43.026 01:55:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.026 01:55:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:26:43.026 01:55:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.026 01:55:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.593 [2024-10-15 01:55:52.330992] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:26:43.593 01:55:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.593 01:55:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:26:43.593 01:55:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:43.593 01:55:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:43.593 01:55:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.593 ************************************ 00:26:43.594 START TEST scheduler_create_thread 00:26:43.594 ************************************ 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 2 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 3 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 4 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 5 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 6 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 7 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 8 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 9 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.594 10 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.594 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.160 01:55:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.160 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:26:44.160 01:55:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:26:44.160 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.160 01:55:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:45.095 01:55:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.095 01:55:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:26:45.095 01:55:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.095 01:55:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:46.105 01:55:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.105 01:55:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:26:46.105 01:55:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:26:46.105 01:55:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.105 01:55:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:46.672 ************************************ 00:26:46.672 END TEST scheduler_create_thread 00:26:46.672 ************************************ 00:26:46.672 01:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.672 00:26:46.672 real 0m3.228s 00:26:46.672 user 0m0.014s 00:26:46.672 sys 0m0.007s 00:26:46.672 01:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.672 01:55:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:46.672 01:55:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:46.672 01:55:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59706 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59706 ']' 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59706 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59706 00:26:46.672 killing process with pid 59706 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59706' 00:26:46.672 01:55:55 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59706 00:26:46.672 01:55:55 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 59706 00:26:47.239 [2024-10-15 01:55:55.951757] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:26:48.613 00:26:48.613 real 0m6.545s 00:26:48.613 user 0m12.384s 00:26:48.613 sys 0m0.551s 00:26:48.613 01:55:57 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.613 ************************************ 00:26:48.613 END TEST event_scheduler 00:26:48.613 ************************************ 00:26:48.613 01:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 01:55:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:26:48.613 01:55:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:26:48.613 01:55:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:48.613 01:55:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.613 01:55:57 event -- common/autotest_common.sh@10 -- # set +x 00:26:48.613 ************************************ 00:26:48.613 START TEST app_repeat 00:26:48.613 ************************************ 00:26:48.613 01:55:57 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:26:48.613 Process app_repeat pid: 59823 00:26:48.613 spdk_app_start Round 0 00:26:48.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59823 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:26:48.613 01:55:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59823' 00:26:48.614 01:55:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:48.614 01:55:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:26:48.614 01:55:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59823 /var/tmp/spdk-nbd.sock 00:26:48.614 01:55:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59823 ']' 00:26:48.614 01:55:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:48.614 01:55:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.614 01:55:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:48.614 01:55:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.614 01:55:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:48.614 [2024-10-15 01:55:57.369151] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
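The scheduler_create_thread test that just finished drives everything through rpc_cmd, a thin wrapper over scripts/rpc.py. As a minimal sketch — assuming the scheduler test app is still listening on its default RPC socket and that the scheduler_plugin module from the test tree is importable via PYTHONPATH — the same sequence could be replayed by hand:

    # Replay of the RPC sequence from the trace above (arguments copied verbatim).
    # scheduler_thread_create pins a thread to a core mask with a target busy %.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Thread IDs (11 and 12 in the trace) are returned by the create calls.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12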
00:26:48.614 [2024-10-15 01:55:57.369286] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59823 ] 00:26:48.614 [2024-10-15 01:55:57.535042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:48.871 [2024-10-15 01:55:57.793236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.871 [2024-10-15 01:55:57.793241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.805 01:55:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.805 01:55:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:26:49.805 01:55:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:49.805 Malloc0 00:26:49.805 01:55:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:50.432 Malloc1 00:26:50.432 01:55:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:50.432 /dev/nbd0 00:26:50.432 01:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:50.691 01:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:50.691 01:55:59 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:50.691 1+0 records in 00:26:50.691 1+0 records out 00:26:50.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265124 s, 15.4 MB/s 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:50.691 01:55:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:50.691 01:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:50.691 01:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:50.691 01:55:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:50.949 /dev/nbd1 00:26:50.949 01:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:50.949 01:55:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:50.949 1+0 records in 00:26:50.949 1+0 records out 00:26:50.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332535 s, 12.3 MB/s 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:50.949 01:55:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:50.949 01:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:50.949 01:55:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:50.950 01:55:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:50.950 01:55:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
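The waitfornbd steps traced above — the loop counters, the /proc/partitions grep, the single direct-I/O dd, and the stat/rm cleanup — amount to a readiness probe for a freshly attached NBD node. An approximate reconstruction follows; the polling interval is an assumption, since no sleep is visible in the xtrace output:

    # Poll until the kernel lists the device, then prove it can service one
    # 4 KiB O_DIRECT read. Succeeds only if the probe read returned data.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval assumed; not visible in the trace
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }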
00:26:50.950 01:55:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:51.208 01:56:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:51.208 { 00:26:51.208 "nbd_device": "/dev/nbd0", 00:26:51.208 "bdev_name": "Malloc0" 00:26:51.208 }, 00:26:51.208 { 00:26:51.208 "nbd_device": "/dev/nbd1", 00:26:51.208 "bdev_name": "Malloc1" 00:26:51.208 } 00:26:51.208 ]' 00:26:51.208 01:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:51.208 { 00:26:51.208 "nbd_device": "/dev/nbd0", 00:26:51.208 "bdev_name": "Malloc0" 00:26:51.208 }, 00:26:51.208 { 00:26:51.208 "nbd_device": "/dev/nbd1", 00:26:51.208 "bdev_name": "Malloc1" 00:26:51.208 } 00:26:51.208 ]' 00:26:51.208 01:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:51.208 01:56:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:51.208 /dev/nbd1' 00:26:51.208 01:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:51.208 /dev/nbd1' 00:26:51.208 01:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:51.209 256+0 records in 00:26:51.209 256+0 records out 00:26:51.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0083933 s, 125 MB/s 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:51.209 256+0 records in 00:26:51.209 256+0 records out 00:26:51.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316224 s, 33.2 MB/s 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:51.209 256+0 records in 00:26:51.209 256+0 records out 00:26:51.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344504 s, 30.4 MB/s 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:51.209 01:56:00 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:51.209 01:56:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:51.776 01:56:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:52.035 01:56:00 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.035 01:56:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:52.317 01:56:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:52.317 01:56:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:52.882 01:56:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:26:54.256 [2024-10-15 01:56:02.940299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:54.256 [2024-10-15 01:56:03.183453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.256 [2024-10-15 01:56:03.183462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.515 [2024-10-15 01:56:03.375984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:26:54.515 [2024-10-15 01:56:03.376116] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:55.890 01:56:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:55.890 spdk_app_start Round 1 00:26:55.890 01:56:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:26:55.890 01:56:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59823 /var/tmp/spdk-nbd.sock 00:26:55.890 01:56:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59823 ']' 00:26:55.890 01:56:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:55.890 01:56:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:55.890 01:56:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
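Each app_repeat round repeats the data-integrity pass just traced: fill a temporary file with random data, write it through both NBD nodes with direct I/O, then read it back with cmp. A condensed sketch of that pass, using the paths exactly as they appear in the log:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write phase through the bdev
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                             # byte-for-byte readback verify
    done
    rm $tmp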
00:26:55.890 01:56:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.890 01:56:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:56.148 01:56:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.148 01:56:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:26:56.148 01:56:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:56.406 Malloc0 00:26:56.406 01:56:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:56.972 Malloc1 00:26:56.972 01:56:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:56.972 /dev/nbd0 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.972 01:56:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:57.230 1+0 records in 00:26:57.230 1+0 records out 
00:26:57.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331384 s, 12.4 MB/s 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:57.230 01:56:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:57.230 01:56:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:57.230 01:56:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:57.230 01:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:57.230 01:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.230 01:56:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:57.487 /dev/nbd1 00:26:57.487 01:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:57.487 01:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:57.487 01:56:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:57.487 01:56:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:57.487 01:56:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:57.487 01:56:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:57.488 1+0 records in 00:26:57.488 1+0 records out 00:26:57.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324729 s, 12.6 MB/s 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:57.488 01:56:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:57.488 01:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:57.488 01:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.488 01:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:57.488 01:56:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.488 01:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:57.746 { 00:26:57.746 "nbd_device": "/dev/nbd0", 00:26:57.746 "bdev_name": "Malloc0" 00:26:57.746 }, 00:26:57.746 { 00:26:57.746 "nbd_device": "/dev/nbd1", 00:26:57.746 "bdev_name": "Malloc1" 00:26:57.746 } 
00:26:57.746 ]' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:57.746 { 00:26:57.746 "nbd_device": "/dev/nbd0", 00:26:57.746 "bdev_name": "Malloc0" 00:26:57.746 }, 00:26:57.746 { 00:26:57.746 "nbd_device": "/dev/nbd1", 00:26:57.746 "bdev_name": "Malloc1" 00:26:57.746 } 00:26:57.746 ]' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:57.746 /dev/nbd1' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:57.746 /dev/nbd1' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:57.746 256+0 records in 00:26:57.746 256+0 records out 00:26:57.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770536 s, 136 MB/s 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.746 01:56:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:57.746 256+0 records in 00:26:57.746 256+0 records out 00:26:57.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272478 s, 38.5 MB/s 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:57.747 256+0 records in 00:26:57.747 256+0 records out 00:26:57.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035993 s, 29.1 MB/s 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.747 01:56:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:58.313 01:56:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:58.571 01:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:58.830 01:56:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:58.830 01:56:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:59.398 01:56:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:00.772 [2024-10-15 01:56:09.456313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:00.772 [2024-10-15 01:56:09.690890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.772 [2024-10-15 01:56:09.690891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.030 [2024-10-15 01:56:09.881971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:01.030 [2024-10-15 01:56:09.882039] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:02.402 spdk_app_start Round 2 00:27:02.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:02.402 01:56:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:02.402 01:56:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:27:02.402 01:56:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59823 /var/tmp/spdk-nbd.sock 00:27:02.402 01:56:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59823 ']' 00:27:02.402 01:56:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:02.402 01:56:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.402 01:56:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
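The nbd_get_count helper traced above turns the RPC's JSON reply into a device count. Note that grep -c exits nonzero when it matches nothing, which is why a bare true follows it in the trace once both disks have been stopped; an equivalent standalone pipeline would be:

    # Count NBD nodes currently attached to this spdk-nbd.sock instance.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true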
00:27:02.402 01:56:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.402 01:56:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:02.661 01:56:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.661 01:56:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:27:02.661 01:56:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:02.919 Malloc0 00:27:02.919 01:56:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:03.177 Malloc1 00:27:03.177 01:56:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.177 01:56:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:03.812 /dev/nbd0 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:03.812 1+0 records in 00:27:03.812 1+0 records out 
00:27:03.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321606 s, 12.7 MB/s 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:03.812 /dev/nbd1 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:03.812 1+0 records in 00:27:03.812 1+0 records out 00:27:03.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359426 s, 11.4 MB/s 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.812 01:56:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.812 01:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:04.380 { 00:27:04.380 "nbd_device": "/dev/nbd0", 00:27:04.380 "bdev_name": "Malloc0" 00:27:04.380 }, 00:27:04.380 { 00:27:04.380 "nbd_device": "/dev/nbd1", 00:27:04.380 "bdev_name": "Malloc1" 00:27:04.380 } 
00:27:04.380 ]' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:04.380 { 00:27:04.380 "nbd_device": "/dev/nbd0", 00:27:04.380 "bdev_name": "Malloc0" 00:27:04.380 }, 00:27:04.380 { 00:27:04.380 "nbd_device": "/dev/nbd1", 00:27:04.380 "bdev_name": "Malloc1" 00:27:04.380 } 00:27:04.380 ]' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:04.380 /dev/nbd1' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:04.380 /dev/nbd1' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:04.380 256+0 records in 00:27:04.380 256+0 records out 00:27:04.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619636 s, 169 MB/s 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:04.380 256+0 records in 00:27:04.380 256+0 records out 00:27:04.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243042 s, 43.1 MB/s 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:04.380 256+0 records in 00:27:04.380 256+0 records out 00:27:04.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336166 s, 31.2 MB/s 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:04.380 01:56:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.380 01:56:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:04.381 01:56:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:04.639 01:56:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.897 01:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:05.154 01:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:05.154 01:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:05.154 01:56:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:05.412 01:56:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:05.412 01:56:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:05.671 01:56:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:07.045 [2024-10-15 01:56:15.910341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:07.302 [2024-10-15 01:56:16.141465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.302 [2024-10-15 01:56:16.141473] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.560 [2024-10-15 01:56:16.331067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:07.560 [2024-10-15 01:56:16.331144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:08.935 01:56:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59823 /var/tmp/spdk-nbd.sock 00:27:08.935 01:56:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59823 ']' 00:27:08.935 01:56:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:08.935 01:56:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:08.935 01:56:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
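[Editor's note] The dd/cmp sequence traced above is the data-verification helper in bdev/nbd_common.sh: seed a temp file with random bytes, write it to every NBD device, then byte-compare each device against the seed. A minimal sketch reconstructed from the xtrace (paths and sizes exactly as logged; error handling elided):

nbd_dd_data_verify() {
    local nbd_list=($1)        # e.g. "/dev/nbd0 /dev/nbd1"
    local operation=$2         # "write" or "verify"
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    if [ "$operation" = write ]; then
        # Seed 1 MiB (256 x 4 KiB blocks) of random data, then copy it
        # onto every NBD device; oflag=direct bypasses the page cache so
        # the writes actually reach the backing bdev.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # cmp -b -n 1M exits non-zero on the first differing byte, which
        # fails the test; the seed file is removed after a clean pass.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}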
00:27:08.935 01:56:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.935 01:56:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:27:09.192 01:56:17 event.app_repeat -- event/event.sh@39 -- # killprocess 59823 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59823 ']' 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59823 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.192 01:56:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59823 00:27:09.192 01:56:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.192 killing process with pid 59823 00:27:09.192 01:56:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.192 01:56:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59823' 00:27:09.192 01:56:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59823 00:27:09.192 01:56:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59823 00:27:10.125 spdk_app_start is called in Round 0. 00:27:10.125 Shutdown signal received, stop current app iteration 00:27:10.125 Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 reinitialization... 00:27:10.125 spdk_app_start is called in Round 1. 00:27:10.125 Shutdown signal received, stop current app iteration 00:27:10.125 Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 reinitialization... 00:27:10.125 spdk_app_start is called in Round 2. 00:27:10.125 Shutdown signal received, stop current app iteration 00:27:10.125 Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 reinitialization... 00:27:10.125 spdk_app_start is called in Round 3. 00:27:10.125 Shutdown signal received, stop current app iteration 00:27:10.383 01:56:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:27:10.383 01:56:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:27:10.383 00:27:10.383 real 0m21.842s 00:27:10.383 user 0m47.216s 00:27:10.383 sys 0m3.083s 00:27:10.383 01:56:19 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.383 ************************************ 00:27:10.383 END TEST app_repeat 00:27:10.383 ************************************ 00:27:10.383 01:56:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:10.383 01:56:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:27:10.383 01:56:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:10.383 01:56:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:10.384 01:56:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.384 01:56:19 event -- common/autotest_common.sh@10 -- # set +x 00:27:10.384 ************************************ 00:27:10.384 START TEST cpu_locks 00:27:10.384 ************************************ 00:27:10.384 01:56:19 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:10.384 * Looking for test storage... 
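[Editor's note] The app_repeat teardown traced above runs through two helpers. Sketches reconstructed from the xtrace; branches this run did not take are reduced to comments, and the sleep between polls is an assumption since only the exit branch appears in the trace:

# bdev/nbd_common.sh: after nbd_stop_disk, poll /proc/partitions until
# the kernel drops the device (at most 20 iterations).
waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1    # assumption: the still-present branch is not shown above
        else
            break
        fi
    done
    return 0
}

# common/autotest_common.sh: kill a test app and reap it. The sudo
# comparison exists because killing a sudo wrapper would orphan the real
# target; ps reports reactor_0 here, so that branch is not taken.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid"                                     # fails if the pid is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        :    # sudo unwrapping elided (not exercised in this run)
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # block until the reactor exits
}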
00:27:10.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:10.384 01:56:19 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:10.384 01:56:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:27:10.384 01:56:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:10.384 01:56:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.384 01:56:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:27:10.642 01:56:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.642 01:56:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.642 01:56:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.642 01:56:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.642 --rc genhtml_branch_coverage=1 00:27:10.642 --rc genhtml_function_coverage=1 00:27:10.642 --rc genhtml_legend=1 00:27:10.642 --rc geninfo_all_blocks=1 00:27:10.642 --rc geninfo_unexecuted_blocks=1 00:27:10.642 00:27:10.642 ' 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.642 --rc genhtml_branch_coverage=1 00:27:10.642 --rc genhtml_function_coverage=1 
00:27:10.642 --rc genhtml_legend=1 00:27:10.642 --rc geninfo_all_blocks=1 00:27:10.642 --rc geninfo_unexecuted_blocks=1 00:27:10.642 00:27:10.642 ' 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.642 --rc genhtml_branch_coverage=1 00:27:10.642 --rc genhtml_function_coverage=1 00:27:10.642 --rc genhtml_legend=1 00:27:10.642 --rc geninfo_all_blocks=1 00:27:10.642 --rc geninfo_unexecuted_blocks=1 00:27:10.642 00:27:10.642 ' 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:10.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.642 --rc genhtml_branch_coverage=1 00:27:10.642 --rc genhtml_function_coverage=1 00:27:10.642 --rc genhtml_legend=1 00:27:10.642 --rc geninfo_all_blocks=1 00:27:10.642 --rc geninfo_unexecuted_blocks=1 00:27:10.642 00:27:10.642 ' 00:27:10.642 01:56:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:27:10.642 01:56:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:27:10.642 01:56:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:27:10.642 01:56:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.642 01:56:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:10.642 ************************************ 00:27:10.642 START TEST default_locks 00:27:10.642 ************************************ 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60305 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60305 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60305 ']' 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.642 01:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:10.643 [2024-10-15 01:56:19.514664] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
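[Editor's note] The lcov probe a few entries up (lt 1.15 2) went through the generic version comparator in scripts/common.sh. A rough reconstruction of the traced '<' path only; the operator dispatch behind case "$op" and the decimal normalization of components are elided:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2); unset slots compare as 0
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((ver1[v] > ver2[v])) && return 1    # lhs newer: "<" does not hold
        ((ver1[v] < ver2[v])) && return 0    # lhs older: "<" holds (1 < 2 here)
    done
    return 1                                  # equal versions: strict "<" fails
}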
00:27:10.643 [2024-10-15 01:56:19.514820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60305 ] 00:27:10.900 [2024-10-15 01:56:19.682742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.157 [2024-10-15 01:56:19.970232] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.096 01:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.096 01:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:27:12.096 01:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60305 00:27:12.096 01:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60305 00:27:12.096 01:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:12.354 01:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60305 00:27:12.354 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60305 ']' 00:27:12.354 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60305 00:27:12.354 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:27:12.354 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:12.354 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60305 00:27:12.613 killing process with pid 60305 00:27:12.613 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:12.613 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:12.613 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60305' 00:27:12.613 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60305 00:27:12.613 01:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60305 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60305 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60305 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60305 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60305 ']' 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.146 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:15.146 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60305) - No such process 00:27:15.146 ERROR: process (pid: 60305) is no longer running 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:15.146 00:27:15.146 real 0m4.252s 00:27:15.146 user 0m4.291s 00:27:15.146 sys 0m0.813s 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.146 ************************************ 00:27:15.146 END TEST default_locks 00:27:15.146 01:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:15.146 ************************************ 00:27:15.146 01:56:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:27:15.146 01:56:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:15.146 01:56:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.146 01:56:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:15.146 ************************************ 00:27:15.146 START TEST default_locks_via_rpc 00:27:15.146 ************************************ 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60382 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60382 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60382 ']' 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
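[Editor's note] default_locks above leans on two helpers worth spelling out. Sketches from the trace; NOT's exit-by-signal and EXIT_STATUS branches are simplified to the path this run actually took:

# event/cpu_locks.sh: a target holds each claimed core as a lock on
# /var/tmp/spdk_cpu_lock_*, so lslocks on the pid proves the claim.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# common/autotest_common.sh: run a command that is expected to fail and
# succeed only if it did. waitforlisten on the killed pid 60305 returns 1.
# The real helper first runs valid_exec_arg (seen in the trace) to make
# sure $1 is a callable function or binary.
NOT() {
    local es=0
    "$@" || es=$?
    if ((es > 128)); then
        :    # signal-exit normalization elided; es is 1 in this run
    fi
    ((!es == 0))    # es != 0 -> expression is true -> NOT returns 0
}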
00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.146 01:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:15.146 [2024-10-15 01:56:23.841399] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:27:15.146 [2024-10-15 01:56:23.841626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:27:15.146 [2024-10-15 01:56:24.013677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.405 [2024-10-15 01:56:24.253719] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60382 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60382 00:27:16.356 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:16.615 01:56:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60382 00:27:16.615 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60382 ']' 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60382 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60382 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:16.872 killing process with pid 60382 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60382' 00:27:16.872 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60382 00:27:16.873 01:56:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60382 00:27:19.406 00:27:19.406 real 0m4.264s 00:27:19.406 user 0m4.280s 00:27:19.406 sys 0m0.774s 00:27:19.406 01:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.406 01:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:19.406 ************************************ 00:27:19.406 END TEST default_locks_via_rpc 00:27:19.406 ************************************ 00:27:19.406 01:56:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:27:19.406 01:56:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:19.406 01:56:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.406 01:56:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:19.406 ************************************ 00:27:19.406 START TEST non_locking_app_on_locked_coremask 00:27:19.406 ************************************ 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60457 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60457 /var/tmp/spdk.sock 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60457 ']' 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.406 01:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:19.406 [2024-10-15 01:56:28.158488] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:27:19.406 [2024-10-15 01:56:28.158713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:27:19.406 [2024-10-15 01:56:28.329381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.665 [2024-10-15 01:56:28.566631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60473 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60473 /var/tmp/spdk2.sock 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60473 ']' 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.640 01:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:20.640 [2024-10-15 01:56:29.585246] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:27:20.640 [2024-10-15 01:56:29.585393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60473 ] 00:27:20.899 [2024-10-15 01:56:29.764144] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
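[Editor's note] non_locking_app_on_locked_coremask boils down to starting two targets on the same core where only the first takes the core lock, the pattern visible in the launch lines above. A sketch with the flags and sockets as logged (pids are from this run; the long binary path is shortened into a variable for readability):

BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$BIN -m 0x1 &                              # pid 60457: claims core 0 via
                                           # /var/tmp/spdk_cpu_lock_000
$BIN -m 0x1 --disable-cpumask-locks \
     -r /var/tmp/spdk2.sock &              # pid 60473: same core, but the
                                           # "CPU core locks deactivated."
                                           # notice confirms no lock is taken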
00:27:20.899 [2024-10-15 01:56:29.764216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.465 [2024-10-15 01:56:30.280781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.995 01:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.995 01:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:23.995 01:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60457 00:27:23.995 01:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60457 00:27:23.995 01:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60457 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60457 ']' 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60457 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60457 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:24.561 killing process with pid 60457 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60457' 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60457 00:27:24.561 01:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60457 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60473 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60473 ']' 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60473 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60473 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:29.826 killing process with pid 60473 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60473' 00:27:29.826 01:56:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60473 00:27:29.826 01:56:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60473 00:27:31.727 00:27:31.727 real 0m12.499s 00:27:31.727 user 0m13.064s 00:27:31.727 sys 0m1.618s 00:27:31.727 01:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.727 01:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:31.727 ************************************ 00:27:31.727 END TEST non_locking_app_on_locked_coremask 00:27:31.727 ************************************ 00:27:31.727 01:56:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:27:31.727 01:56:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:31.727 01:56:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.727 01:56:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:31.727 ************************************ 00:27:31.727 START TEST locking_app_on_unlocked_coremask 00:27:31.727 ************************************ 00:27:31.727 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:27:31.727 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60634 00:27:31.727 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60634 /var/tmp/spdk.sock 00:27:31.727 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:27:31.727 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60634 ']' 00:27:31.728 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.728 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:31.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.728 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.728 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:31.728 01:56:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:31.728 [2024-10-15 01:56:40.688443] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:27:31.728 [2024-10-15 01:56:40.688607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:27:31.985 [2024-10-15 01:56:40.853348] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
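[Editor's note] Every case in this log is wrapped by run_test from common/autotest_common.sh, which produces the START/END banners and the real/user/sys summaries seen above. A rough sketch of the wrapper's shape only; the actual helper also guards its argument count and toggles xtrace, visible as the '[' 2 -le 1 ']' and xtrace_disable entries in the trace:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # bash's time keyword emits the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}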
00:27:31.985 [2024-10-15 01:56:40.853425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.243 [2024-10-15 01:56:41.091106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60656 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60656 /var/tmp/spdk2.sock 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60656 ']' 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.179 01:56:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:33.179 [2024-10-15 01:56:42.080031] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:27:33.179 [2024-10-15 01:56:42.080253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60656 ] 00:27:33.437 [2024-10-15 01:56:42.269593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.004 [2024-10-15 01:56:42.746331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.930 01:56:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.930 01:56:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:35.930 01:56:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60656 00:27:35.930 01:56:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60656 00:27:35.930 01:56:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60634 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60634 ']' 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60634 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60634 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.865 killing process with pid 60634 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60634' 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60634 00:27:36.865 01:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60634 00:27:42.192 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60656 00:27:42.192 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60656 ']' 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60656 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60656 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:42.193 killing process with pid 60656 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60656' 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60656 00:27:42.193 01:56:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60656 00:27:44.095 00:27:44.095 real 0m12.438s 00:27:44.095 user 0m12.854s 00:27:44.095 sys 0m1.615s 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.095 ************************************ 00:27:44.095 END TEST locking_app_on_unlocked_coremask 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:44.095 ************************************ 00:27:44.095 01:56:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:27:44.095 01:56:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.095 01:56:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.095 01:56:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:44.095 ************************************ 00:27:44.095 START TEST locking_app_on_locked_coremask 00:27:44.095 ************************************ 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60811 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60811 /var/tmp/spdk.sock 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60811 ']' 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:44.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:44.095 01:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:44.380 [2024-10-15 01:56:53.185218] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:27:44.380 [2024-10-15 01:56:53.185501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:27:44.380 [2024-10-15 01:56:53.352505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.638 [2024-10-15 01:56:53.588824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60833 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60833 /var/tmp/spdk2.sock 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60833 /var/tmp/spdk2.sock 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60833 /var/tmp/spdk2.sock 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60833 ']' 00:27:45.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.573 01:56:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:45.573 [2024-10-15 01:56:54.548230] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:27:45.573 [2024-10-15 01:56:54.548605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60833 ] 00:27:45.832 [2024-10-15 01:56:54.722782] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60811 has claimed it. 00:27:45.832 [2024-10-15 01:56:54.722867] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:46.398 ERROR: process (pid: 60833) is no longer running 00:27:46.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60833) - No such process 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60811 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60811 00:27:46.398 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:46.657 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60811 00:27:46.657 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60811 ']' 00:27:46.657 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60811 00:27:46.657 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:46.657 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:46.657 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60811 00:27:46.916 killing process with pid 60811 00:27:46.916 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:46.916 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:46.916 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60811' 00:27:46.916 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60811 00:27:46.916 01:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60811 00:27:49.449 00:27:49.449 real 0m4.985s 00:27:49.449 user 0m5.347s 00:27:49.449 sys 0m0.838s 00:27:49.449 01:56:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:49.449 ************************************ 00:27:49.449 END 
TEST locking_app_on_locked_coremask 00:27:49.449 01:56:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:49.449 ************************************ 00:27:49.449 01:56:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:27:49.449 01:56:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:49.449 01:56:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:49.449 01:56:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:49.449 ************************************ 00:27:49.449 START TEST locking_overlapped_coremask 00:27:49.449 ************************************ 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60899 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60899 /var/tmp/spdk.sock 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60899 ']' 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:49.449 01:56:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:49.449 [2024-10-15 01:56:58.240619] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:27:49.449 [2024-10-15 01:56:58.241803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60899 ] 00:27:49.449 [2024-10-15 01:56:58.424228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:49.708 [2024-10-15 01:56:58.666501] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.708 [2024-10-15 01:56:58.666909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.708 [2024-10-15 01:56:58.666942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60926 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60926 /var/tmp/spdk2.sock 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60926 /var/tmp/spdk2.sock 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60926 /var/tmp/spdk2.sock 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60926 ']' 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:50.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.641 01:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:50.899 [2024-10-15 01:56:59.654862] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
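[Editor's note] The two cpumasks traced above intersect by construction, so the claim failure that follows is the expected outcome:

# -m 0x7  = 0b00111 -> cores 0,1,2  (first target, pid 60899, holding
#                                    /var/tmp/spdk_cpu_lock_000..002)
# -m 0x1c = 0b11100 -> cores 2,3,4  (second target, pid 60926)
# Overlap on core 2: the second target cannot lock core 2 and exits,
# which NOT waitforlisten converts into a pass.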
00:27:50.899 [2024-10-15 01:56:59.655327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:27:50.899 [2024-10-15 01:56:59.838152] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60899 has claimed it. 00:27:50.899 [2024-10-15 01:56:59.838237] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:51.466 ERROR: process (pid: 60926) is no longer running 00:27:51.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60926) - No such process 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60899 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60899 ']' 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60899 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60899 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60899' 00:27:51.466 killing process with pid 60899 00:27:51.466 01:57:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60899 00:27:51.466 01:57:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60899 00:27:54.010 00:27:54.010 real 0m4.604s 00:27:54.010 user 0m12.033s 00:27:54.010 sys 0m0.697s 00:27:54.010 01:57:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:54.010 01:57:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:54.010 ************************************ 00:27:54.010 END TEST locking_overlapped_coremask 00:27:54.010 ************************************ 00:27:54.011 01:57:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:27:54.011 01:57:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:54.011 01:57:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:54.011 01:57:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:54.011 ************************************ 00:27:54.011 START TEST locking_overlapped_coremask_via_rpc 00:27:54.011 ************************************ 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:27:54.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60990 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60990 /var/tmp/spdk.sock 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60990 ']' 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.011 01:57:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:54.011 [2024-10-15 01:57:02.900083] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:27:54.011 [2024-10-15 01:57:02.900268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60990 ] 00:27:54.269 [2024-10-15 01:57:03.077033] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
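The locking_overlapped_coremask pass that just ended reduces to two targets with overlapping reactor masks: -m 0x7 claims cores 0-2, so a second target requesting -m 0x1c (cores 2-4) hits "Cannot create lock on core 2" and exits with "Unable to acquire lock on assigned core mask - exiting." A minimal sketch of that scenario, with paths shortened from the ones in this run:

  build/bin/spdk_tgt -m 0x7 &                        # claims cores 0,1,2
  sleep 1                                            # let it create its core lock files
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock  # wants cores 2,3,4; exits on the core-2 lock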
00:27:54.269 [2024-10-15 01:57:03.077108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:54.527 [2024-10-15 01:57:03.327330] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.527 [2024-10-15 01:57:03.327499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.527 [2024-10-15 01:57:03.327592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61008 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61008 /var/tmp/spdk2.sock 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61008 ']' 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.461 01:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:55.461 [2024-10-15 01:57:04.323915] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:27:55.461 [2024-10-15 01:57:04.324364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:27:55.719 [2024-10-15 01:57:04.511274] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:27:55.719 [2024-10-15 01:57:04.511336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.286 [2024-10-15 01:57:05.009390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.286 [2024-10-15 01:57:05.009474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.286 [2024-10-15 01:57:05.009499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:58.187 [2024-10-15 01:57:07.150621] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60990 has claimed it. 
00:27:58.187 request: 00:27:58.187 { 00:27:58.187 "method": "framework_enable_cpumask_locks", 00:27:58.187 "req_id": 1 00:27:58.187 } 00:27:58.187 Got JSON-RPC error response 00:27:58.187 response: 00:27:58.187 { 00:27:58.187 "code": -32603, 00:27:58.187 "message": "Failed to claim CPU core: 2" 00:27:58.187 } 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60990 /var/tmp/spdk.sock 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60990 ']' 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.187 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61008 /var/tmp/spdk2.sock 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61008 ']' 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:58.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
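The via_rpc variant defers the claim: both targets start with --disable-cpumask-locks ("CPU core locks deactivated."), and cores are only locked once framework_enable_cpumask_locks is called. The first target's call succeeds and materializes /var/tmp/spdk_cpu_lock_000 through _002, the files check_remaining_locks globs for; the second target's call fails with the -32603 "Failed to claim CPU core: 2" response shown above. A minimal sketch, with paths shortened from the ones in this run:

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 1
  scripts/rpc.py framework_enable_cpumask_locks                         # ok: locks cores 0,1,2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # error -32603: core 2 claimed
  ls /var/tmp/spdk_cpu_lock_*                                           # _000 _001 _002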
00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.446 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:58.704 ************************************ 00:27:58.704 END TEST locking_overlapped_coremask_via_rpc 00:27:58.704 ************************************ 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:58.704 00:27:58.704 real 0m4.919s 00:27:58.704 user 0m1.773s 00:27:58.704 sys 0m0.215s 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.704 01:57:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:58.962 01:57:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:27:58.962 01:57:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60990 ]] 00:27:58.962 01:57:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60990 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60990 ']' 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60990 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60990 00:27:58.962 killing process with pid 60990 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60990' 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60990 00:27:58.962 01:57:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60990 00:28:01.491 01:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61008 ]] 00:28:01.492 01:57:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61008 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61008 ']' 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61008 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.492 
01:57:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61008 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61008' 00:28:01.492 killing process with pid 61008 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61008 00:28:01.492 01:57:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61008 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60990 ]] 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60990 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60990 ']' 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60990 00:28:04.020 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60990) - No such process 00:28:04.020 Process with pid 60990 is not found 00:28:04.020 Process with pid 61008 is not found 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60990 is not found' 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61008 ]] 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61008 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61008 ']' 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61008 00:28:04.020 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61008) - No such process 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61008 is not found' 00:28:04.020 01:57:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:04.020 ************************************ 00:28:04.020 END TEST cpu_locks 00:28:04.020 ************************************ 00:28:04.020 00:28:04.020 real 0m53.325s 00:28:04.020 user 1m29.795s 00:28:04.020 sys 0m7.849s 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.020 01:57:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:04.020 ************************************ 00:28:04.020 END TEST event 00:28:04.020 ************************************ 00:28:04.020 00:28:04.020 real 1m27.892s 00:28:04.020 user 2m37.564s 00:28:04.020 sys 0m12.120s 00:28:04.020 01:57:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.020 01:57:12 event -- common/autotest_common.sh@10 -- # set +x 00:28:04.020 01:57:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:04.020 01:57:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:04.020 01:57:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.020 01:57:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.020 ************************************ 00:28:04.020 START TEST thread 00:28:04.020 ************************************ 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:04.020 * Looking for test storage... 
00:28:04.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:04.020 01:57:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.020 01:57:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.020 01:57:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.020 01:57:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.020 01:57:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.020 01:57:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.020 01:57:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.020 01:57:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.020 01:57:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.020 01:57:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.020 01:57:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.020 01:57:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:28:04.020 01:57:12 thread -- scripts/common.sh@345 -- # : 1 00:28:04.020 01:57:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.020 01:57:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:04.020 01:57:12 thread -- scripts/common.sh@365 -- # decimal 1 00:28:04.020 01:57:12 thread -- scripts/common.sh@353 -- # local d=1 00:28:04.020 01:57:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.020 01:57:12 thread -- scripts/common.sh@355 -- # echo 1 00:28:04.020 01:57:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.020 01:57:12 thread -- scripts/common.sh@366 -- # decimal 2 00:28:04.020 01:57:12 thread -- scripts/common.sh@353 -- # local d=2 00:28:04.020 01:57:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.020 01:57:12 thread -- scripts/common.sh@355 -- # echo 2 00:28:04.020 01:57:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.020 01:57:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.020 01:57:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.020 01:57:12 thread -- scripts/common.sh@368 -- # return 0 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.020 01:57:12 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.020 --rc genhtml_branch_coverage=1 00:28:04.020 --rc genhtml_function_coverage=1 00:28:04.020 --rc genhtml_legend=1 00:28:04.021 --rc geninfo_all_blocks=1 00:28:04.021 --rc geninfo_unexecuted_blocks=1 00:28:04.021 00:28:04.021 ' 00:28:04.021 01:57:12 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:04.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.021 --rc genhtml_branch_coverage=1 00:28:04.021 --rc genhtml_function_coverage=1 00:28:04.021 --rc genhtml_legend=1 00:28:04.021 --rc geninfo_all_blocks=1 00:28:04.021 --rc geninfo_unexecuted_blocks=1 00:28:04.021 00:28:04.021 ' 00:28:04.021 01:57:12 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:04.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:04.021 --rc genhtml_branch_coverage=1 00:28:04.021 --rc genhtml_function_coverage=1 00:28:04.021 --rc genhtml_legend=1 00:28:04.021 --rc geninfo_all_blocks=1 00:28:04.021 --rc geninfo_unexecuted_blocks=1 00:28:04.021 00:28:04.021 ' 00:28:04.021 01:57:12 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:04.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.021 --rc genhtml_branch_coverage=1 00:28:04.021 --rc genhtml_function_coverage=1 00:28:04.021 --rc genhtml_legend=1 00:28:04.021 --rc geninfo_all_blocks=1 00:28:04.021 --rc geninfo_unexecuted_blocks=1 00:28:04.021 00:28:04.021 ' 00:28:04.021 01:57:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:04.021 01:57:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:28:04.021 01:57:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.021 01:57:12 thread -- common/autotest_common.sh@10 -- # set +x 00:28:04.021 ************************************ 00:28:04.021 START TEST thread_poller_perf 00:28:04.021 ************************************ 00:28:04.021 01:57:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:04.021 [2024-10-15 01:57:12.842318] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:04.021 [2024-10-15 01:57:12.845813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61209 ] 00:28:04.021 [2024-10-15 01:57:13.024490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.279 [2024-10-15 01:57:13.280750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.279 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:28:06.182 [2024-10-15T01:57:15.194Z] ====================================== 00:28:06.182 [2024-10-15T01:57:15.194Z] busy:2209664233 (cyc) 00:28:06.182 [2024-10-15T01:57:15.194Z] total_run_count: 302000 00:28:06.182 [2024-10-15T01:57:15.194Z] tsc_hz: 2200000000 (cyc) 00:28:06.182 [2024-10-15T01:57:15.194Z] ====================================== 00:28:06.182 [2024-10-15T01:57:15.194Z] poller_cost: 7316 (cyc), 3325 (nsec) 00:28:06.182 00:28:06.182 real 0m1.890s 00:28:06.182 user 0m1.641s 00:28:06.182 sys 0m0.129s 00:28:06.182 01:57:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.182 01:57:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:06.182 ************************************ 00:28:06.182 END TEST thread_poller_perf 00:28:06.182 ************************************ 00:28:06.182 01:57:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:06.182 01:57:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:28:06.182 01:57:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.182 01:57:14 thread -- common/autotest_common.sh@10 -- # set +x 00:28:06.182 ************************************ 00:28:06.182 START TEST thread_poller_perf 00:28:06.182 ************************************ 00:28:06.182 01:57:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:06.182 [2024-10-15 01:57:14.787802] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:06.182 [2024-10-15 01:57:14.787943] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:28:06.182 [2024-10-15 01:57:14.957909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.440 Running 1000 pollers for 1 seconds with 0 microseconds period. 
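Each ====== summary derives poller_cost as busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz: for the 1-microsecond run above, 2209664233 / 302000 ≈ 7316 cycles, and 7316 cycles at 2.2 GHz ≈ 3325 nsec, matching the reported values (the 0-microsecond run below works out the same way: 2203897891 / 3816000 ≈ 577 cycles ≈ 262 nsec). The same arithmetic in shell, using the first run's counters:

  busy=2209664233 runs=302000 tsc_hz=2200000000
  echo "cyc=$((busy / runs)) nsec=$((busy * 1000000000 / runs / tsc_hz))"  # cyc=7316 nsec=3325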
00:28:06.440 [2024-10-15 01:57:15.234812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.815 [2024-10-15T01:57:16.827Z] ====================================== 00:28:07.815 [2024-10-15T01:57:16.827Z] busy:2203897891 (cyc) 00:28:07.815 [2024-10-15T01:57:16.827Z] total_run_count: 3816000 00:28:07.815 [2024-10-15T01:57:16.827Z] tsc_hz: 2200000000 (cyc) 00:28:07.815 [2024-10-15T01:57:16.827Z] ====================================== 00:28:07.815 [2024-10-15T01:57:16.827Z] poller_cost: 577 (cyc), 262 (nsec) 00:28:07.815 00:28:07.815 real 0m1.883s 00:28:07.815 user 0m1.662s 00:28:07.815 sys 0m0.111s 00:28:07.815 01:57:16 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.815 ************************************ 00:28:07.815 END TEST thread_poller_perf 00:28:07.815 ************************************ 00:28:07.815 01:57:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:07.815 01:57:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:28:07.815 ************************************ 00:28:07.815 END TEST thread 00:28:07.815 ************************************ 00:28:07.815 00:28:07.815 real 0m4.050s 00:28:07.815 user 0m3.431s 00:28:07.815 sys 0m0.381s 00:28:07.815 01:57:16 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.815 01:57:16 thread -- common/autotest_common.sh@10 -- # set +x 00:28:07.815 01:57:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:28:07.815 01:57:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:07.815 01:57:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:07.815 01:57:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:07.815 01:57:16 -- common/autotest_common.sh@10 -- # set +x 00:28:07.815 ************************************ 00:28:07.815 START TEST app_cmdline 00:28:07.815 ************************************ 00:28:07.815 01:57:16 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:07.815 * Looking for test storage... 
00:28:07.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:07.815 01:57:16 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:07.815 01:57:16 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:28:07.815 01:57:16 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:08.073 01:57:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:08.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.073 --rc genhtml_branch_coverage=1 00:28:08.073 --rc genhtml_function_coverage=1 00:28:08.073 --rc genhtml_legend=1 00:28:08.073 --rc geninfo_all_blocks=1 00:28:08.073 --rc geninfo_unexecuted_blocks=1 00:28:08.073 00:28:08.073 ' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:08.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.073 --rc genhtml_branch_coverage=1 00:28:08.073 --rc genhtml_function_coverage=1 00:28:08.073 --rc genhtml_legend=1 00:28:08.073 --rc geninfo_all_blocks=1 00:28:08.073 --rc geninfo_unexecuted_blocks=1 00:28:08.073 
00:28:08.073 ' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:08.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.073 --rc genhtml_branch_coverage=1 00:28:08.073 --rc genhtml_function_coverage=1 00:28:08.073 --rc genhtml_legend=1 00:28:08.073 --rc geninfo_all_blocks=1 00:28:08.073 --rc geninfo_unexecuted_blocks=1 00:28:08.073 00:28:08.073 ' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:08.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.073 --rc genhtml_branch_coverage=1 00:28:08.073 --rc genhtml_function_coverage=1 00:28:08.073 --rc genhtml_legend=1 00:28:08.073 --rc geninfo_all_blocks=1 00:28:08.073 --rc geninfo_unexecuted_blocks=1 00:28:08.073 00:28:08.073 ' 00:28:08.073 01:57:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:28:08.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.073 01:57:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61340 00:28:08.073 01:57:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:28:08.073 01:57:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61340 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61340 ']' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.073 01:57:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:08.073 [2024-10-15 01:57:17.018612] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
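cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so this target serves only those two methods: the traces below show spdk_get_version returning the version object while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". A minimal sketch of the same allowlist behavior, with paths shortened:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 1
  scripts/rpc.py spdk_get_version        # allowed: returns the {"version": ..., "fields": ...} object
  scripts/rpc.py env_dpdk_get_mem_stats  # blocked: JSON-RPC error -32601, Method not found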
00:28:08.073 [2024-10-15 01:57:17.019876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61340 ] 00:28:08.332 [2024-10-15 01:57:17.198382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.590 [2024-10-15 01:57:17.435839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.524 01:57:18 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.524 01:57:18 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:28:09.524 01:57:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:28:09.782 { 00:28:09.782 "version": "SPDK v25.01-pre git sha1 d056e7588", 00:28:09.782 "fields": { 00:28:09.782 "major": 25, 00:28:09.782 "minor": 1, 00:28:09.782 "patch": 0, 00:28:09.782 "suffix": "-pre", 00:28:09.782 "commit": "d056e7588" 00:28:09.782 } 00:28:09.782 } 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:28:09.782 01:57:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:09.782 01:57:18 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:10.040 request: 00:28:10.040 { 00:28:10.040 "method": "env_dpdk_get_mem_stats", 00:28:10.040 "req_id": 1 00:28:10.040 } 00:28:10.040 Got JSON-RPC error response 00:28:10.040 response: 00:28:10.040 { 00:28:10.040 "code": -32601, 00:28:10.040 "message": "Method not found" 00:28:10.040 } 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:10.040 01:57:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61340 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61340 ']' 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61340 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61340 00:28:10.040 killing process with pid 61340 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61340' 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@969 -- # kill 61340 00:28:10.040 01:57:18 app_cmdline -- common/autotest_common.sh@974 -- # wait 61340 00:28:12.571 00:28:12.571 real 0m4.595s 00:28:12.571 user 0m5.032s 00:28:12.571 sys 0m0.666s 00:28:12.571 01:57:21 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.571 01:57:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:12.571 ************************************ 00:28:12.571 END TEST app_cmdline 00:28:12.571 ************************************ 00:28:12.571 01:57:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:12.571 01:57:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:12.571 01:57:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.571 01:57:21 -- common/autotest_common.sh@10 -- # set +x 00:28:12.571 ************************************ 00:28:12.571 START TEST version 00:28:12.571 ************************************ 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:12.571 * Looking for test storage... 
00:28:12.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1681 -- # lcov --version 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:12.571 01:57:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.571 01:57:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.571 01:57:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.571 01:57:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.571 01:57:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.571 01:57:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.571 01:57:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.571 01:57:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.571 01:57:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.571 01:57:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.571 01:57:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.571 01:57:21 version -- scripts/common.sh@344 -- # case "$op" in 00:28:12.571 01:57:21 version -- scripts/common.sh@345 -- # : 1 00:28:12.571 01:57:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.571 01:57:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:12.571 01:57:21 version -- scripts/common.sh@365 -- # decimal 1 00:28:12.571 01:57:21 version -- scripts/common.sh@353 -- # local d=1 00:28:12.571 01:57:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.571 01:57:21 version -- scripts/common.sh@355 -- # echo 1 00:28:12.571 01:57:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.571 01:57:21 version -- scripts/common.sh@366 -- # decimal 2 00:28:12.571 01:57:21 version -- scripts/common.sh@353 -- # local d=2 00:28:12.571 01:57:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.571 01:57:21 version -- scripts/common.sh@355 -- # echo 2 00:28:12.571 01:57:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.571 01:57:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.571 01:57:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.571 01:57:21 version -- scripts/common.sh@368 -- # return 0 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.571 --rc genhtml_branch_coverage=1 00:28:12.571 --rc genhtml_function_coverage=1 00:28:12.571 --rc genhtml_legend=1 00:28:12.571 --rc geninfo_all_blocks=1 00:28:12.571 --rc geninfo_unexecuted_blocks=1 00:28:12.571 00:28:12.571 ' 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.571 --rc genhtml_branch_coverage=1 00:28:12.571 --rc genhtml_function_coverage=1 00:28:12.571 --rc genhtml_legend=1 00:28:12.571 --rc geninfo_all_blocks=1 00:28:12.571 --rc geninfo_unexecuted_blocks=1 00:28:12.571 00:28:12.571 ' 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:12.571 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:12.571 --rc genhtml_branch_coverage=1 00:28:12.571 --rc genhtml_function_coverage=1 00:28:12.571 --rc genhtml_legend=1 00:28:12.571 --rc geninfo_all_blocks=1 00:28:12.571 --rc geninfo_unexecuted_blocks=1 00:28:12.571 00:28:12.571 ' 00:28:12.571 01:57:21 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:12.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.571 --rc genhtml_branch_coverage=1 00:28:12.571 --rc genhtml_function_coverage=1 00:28:12.571 --rc genhtml_legend=1 00:28:12.571 --rc geninfo_all_blocks=1 00:28:12.571 --rc geninfo_unexecuted_blocks=1 00:28:12.571 00:28:12.571 ' 00:28:12.571 01:57:21 version -- app/version.sh@17 -- # get_header_version major 00:28:12.571 01:57:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:12.571 01:57:21 version -- app/version.sh@14 -- # cut -f2 00:28:12.571 01:57:21 version -- app/version.sh@14 -- # tr -d '"' 00:28:12.571 01:57:21 version -- app/version.sh@17 -- # major=25 00:28:12.572 01:57:21 version -- app/version.sh@18 -- # get_header_version minor 00:28:12.572 01:57:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:12.572 01:57:21 version -- app/version.sh@14 -- # cut -f2 00:28:12.572 01:57:21 version -- app/version.sh@14 -- # tr -d '"' 00:28:12.572 01:57:21 version -- app/version.sh@18 -- # minor=1 00:28:12.572 01:57:21 version -- app/version.sh@19 -- # get_header_version patch 00:28:12.572 01:57:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:12.572 01:57:21 version -- app/version.sh@14 -- # tr -d '"' 00:28:12.572 01:57:21 version -- app/version.sh@14 -- # cut -f2 00:28:12.572 01:57:21 version -- app/version.sh@19 -- # patch=0 00:28:12.572 01:57:21 version -- app/version.sh@20 -- # get_header_version suffix 00:28:12.572 01:57:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:12.572 01:57:21 version -- app/version.sh@14 -- # cut -f2 00:28:12.572 01:57:21 version -- app/version.sh@14 -- # tr -d '"' 00:28:12.572 01:57:21 version -- app/version.sh@20 -- # suffix=-pre 00:28:12.572 01:57:21 version -- app/version.sh@22 -- # version=25.1 00:28:12.572 01:57:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:28:12.572 01:57:21 version -- app/version.sh@28 -- # version=25.1rc0 00:28:12.572 01:57:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:12.572 01:57:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:28:12.830 01:57:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:28:12.830 01:57:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:28:12.830 00:28:12.830 real 0m0.253s 00:28:12.830 user 0m0.168s 00:28:12.830 sys 0m0.120s 00:28:12.830 ************************************ 00:28:12.830 END TEST version 00:28:12.830 ************************************ 00:28:12.830 01:57:21 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.830 01:57:21 version -- common/autotest_common.sh@10 -- # set +x 00:28:12.830 01:57:21 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:28:12.830 01:57:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:28:12.830 01:57:21 -- spdk/autotest.sh@194 -- # uname -s 00:28:12.830 01:57:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:28:12.830 01:57:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:12.830 01:57:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:12.830 01:57:21 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:28:12.830 01:57:21 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:12.830 01:57:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:12.830 01:57:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.830 01:57:21 -- common/autotest_common.sh@10 -- # set +x 00:28:12.830 ************************************ 00:28:12.830 START TEST blockdev_nvme 00:28:12.830 ************************************ 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:12.830 * Looking for test storage... 00:28:12.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.830 01:57:21 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:12.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.830 --rc genhtml_branch_coverage=1 00:28:12.830 --rc genhtml_function_coverage=1 00:28:12.830 --rc genhtml_legend=1 00:28:12.830 --rc geninfo_all_blocks=1 00:28:12.830 --rc geninfo_unexecuted_blocks=1 00:28:12.830 00:28:12.830 ' 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:12.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.830 --rc genhtml_branch_coverage=1 00:28:12.830 --rc genhtml_function_coverage=1 00:28:12.830 --rc genhtml_legend=1 00:28:12.830 --rc geninfo_all_blocks=1 00:28:12.830 --rc geninfo_unexecuted_blocks=1 00:28:12.830 00:28:12.830 ' 00:28:12.830 01:57:21 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:12.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.830 --rc genhtml_branch_coverage=1 00:28:12.830 --rc genhtml_function_coverage=1 00:28:12.830 --rc genhtml_legend=1 00:28:12.830 --rc geninfo_all_blocks=1 00:28:12.830 --rc geninfo_unexecuted_blocks=1 00:28:12.830 00:28:12.831 ' 00:28:12.831 01:57:21 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:12.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.831 --rc genhtml_branch_coverage=1 00:28:12.831 --rc genhtml_function_coverage=1 00:28:12.831 --rc genhtml_legend=1 00:28:12.831 --rc geninfo_all_blocks=1 00:28:12.831 --rc geninfo_unexecuted_blocks=1 00:28:12.831 00:28:12.831 ' 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:12.831 01:57:21 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:28:12.831 01:57:21 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61534 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61534 00:28:13.089 01:57:21 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:13.089 01:57:21 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61534 ']' 00:28:13.089 01:57:21 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.089 01:57:21 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.089 01:57:21 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.089 01:57:21 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.089 01:57:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:13.089 [2024-10-15 01:57:21.957423] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:28:13.089 [2024-10-15 01:57:21.957792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61534 ] 00:28:13.348 [2024-10-15 01:57:22.140537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.611 [2024-10-15 01:57:22.397399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.547 01:57:23 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:14.547 01:57:23 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:28:14.547 01:57:23 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:28:14.547 01:57:23 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:28:14.547 01:57:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:28:14.547 01:57:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:28:14.547 01:57:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:14.547 01:57:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:28:14.547 01:57:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.547 01:57:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:14.547 [2024-10-15 01:57:23.468503] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035017da0 was disconnected and freed. delete nvme_qpair. 00:28:14.547 [2024-10-15 01:57:23.535418] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035000720 was disconnected and freed. delete nvme_qpair. 00:28:14.806 [2024-10-15 01:57:23.608821] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001c4307a0 was disconnected and freed. delete nvme_qpair. 00:28:14.806 [2024-10-15 01:57:23.674701] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001c725920 was disconnected and freed. delete nvme_qpair. 
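Note on the step just above: setup_nvme_conf has gen_nvme.sh emit one bdev_nvme_attach_controller entry per emulated PCIe controller and loads the whole JSON in a single load_subsystem_config RPC. A minimal sketch of attaching just the first controller by hand instead, assuming root privileges and the same repo path and traddr as this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # On success the RPC prints the namespace bdevs it created (here Nvme0n1).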
00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.806 01:57:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:14.806 01:57:23 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:28:15.065 01:57:23 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.065 01:57:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:28:15.065 01:57:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:28:15.066 01:57:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "10672a9e-bb26-49ce-9547-90261673a90b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "10672a9e-bb26-49ce-9547-90261673a90b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' 
"serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "86a5f089-81b0-4ecc-88f9-b6c54ce0ba6e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "86a5f089-81b0-4ecc-88f9-b6c54ce0ba6e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9e249163-94c1-4b47-9894-afd3035b7427"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9e249163-94c1-4b47-9894-afd3035b7427",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fe76575d-565d-4d25-8fd3-e39bd7aad97f"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe76575d-565d-4d25-8fd3-e39bd7aad97f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e5bad5eb-7246-4896-80e3-720da448a618"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5bad5eb-7246-4896-80e3-720da448a618",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c8513d07-1ceb-447e-9e6e-275bd7ef45c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c8513d07-1ceb-447e-9e6e-275bd7ef45c7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:15.066 01:57:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:28:15.066 01:57:23 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:28:15.066 01:57:23 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:28:15.066 01:57:23 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61534 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61534 ']' 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61534 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61534 00:28:15.066 killing process with pid 61534 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61534' 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61534 00:28:15.066 01:57:23 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61534 00:28:17.597 01:57:26 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:17.597 01:57:26 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:17.597 01:57:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:17.597 01:57:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:17.597 01:57:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:17.597 ************************************ 00:28:17.597 START TEST bdev_hello_world 00:28:17.597 ************************************ 00:28:17.597 01:57:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:17.597 [2024-10-15 01:57:26.380080] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:28:17.597 [2024-10-15 01:57:26.380243] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61629 ] 00:28:17.597 [2024-10-15 01:57:26.543744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.856 [2024-10-15 01:57:26.782639] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.422 [2024-10-15 01:57:27.216838] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:28:18.422 [2024-10-15 01:57:27.283347] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:28:18.422 [2024-10-15 01:57:27.356724] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992d8a0 was disconnected and freed. delete nvme_qpair. 00:28:18.422 [2024-10-15 01:57:27.422208] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019905920 was disconnected and freed. delete nvme_qpair. 00:28:18.681 [2024-10-15 01:57:27.440639] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:18.681 [2024-10-15 01:57:27.440704] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:18.681 [2024-10-15 01:57:27.440746] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:18.681 [2024-10-15 01:57:27.444064] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:18.681 [2024-10-15 01:57:27.444539] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:18.681 [2024-10-15 01:57:27.444575] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:18.681 [2024-10-15 01:57:27.444815] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:18.681 00:28:18.681 [2024-10-15 01:57:27.444861] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:18.681 [2024-10-15 01:57:27.446612] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019907da0 was disconnected and freed. delete nvme_qpair. 
00:28:20.055 00:28:20.055 real 0m2.350s 00:28:20.055 user 0m1.972s 00:28:20.055 sys 0m0.266s 00:28:20.055 01:57:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:20.055 ************************************ 00:28:20.055 END TEST bdev_hello_world 00:28:20.055 ************************************ 00:28:20.055 01:57:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 01:57:28 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:28:20.055 01:57:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:20.055 01:57:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:20.055 01:57:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 ************************************ 00:28:20.055 START TEST bdev_bounds 00:28:20.055 ************************************ 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61671 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61671' 00:28:20.055 Process bdevio pid: 61671 00:28:20.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61671 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61671 ']' 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:20.055 01:57:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 [2024-10-15 01:57:28.817205] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:20.055 [2024-10-15 01:57:28.817428] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:28:20.055 [2024-10-15 01:57:29.003000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:20.314 [2024-10-15 01:57:29.289331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.314 [2024-10-15 01:57:29.289466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.314 [2024-10-15 01:57:29.289703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.880 [2024-10-15 01:57:29.725439] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. 
delete nvme_qpair. 00:28:20.880 [2024-10-15 01:57:29.792724] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:28:20.880 [2024-10-15 01:57:29.867164] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992d8a0 was disconnected and freed. delete nvme_qpair. 00:28:21.137 [2024-10-15 01:57:29.944763] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019905920 was disconnected and freed. delete nvme_qpair. 00:28:21.137 01:57:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:21.137 01:57:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:28:21.137 01:57:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:21.137 I/O targets: 00:28:21.137 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:28:21.137 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:28:21.137 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:21.137 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:21.137 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:21.137 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:28:21.137 00:28:21.137 00:28:21.137 CUnit - A unit testing framework for C - Version 2.1-3 00:28:21.137 http://cunit.sourceforge.net/ 00:28:21.137 00:28:21.137 00:28:21.137 Suite: bdevio tests on: Nvme3n1 00:28:21.137 Test: blockdev write read block ...passed 00:28:21.137 Test: blockdev write zeroes read block ...passed 00:28:21.137 Test: blockdev write zeroes read no split ...passed 00:28:21.396 Test: blockdev write zeroes read split ...passed 00:28:21.396 Test: blockdev write zeroes read split partial ...passed 00:28:21.396 Test: blockdev reset ...[2024-10-15 01:57:30.186916] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:28:21.396 [2024-10-15 01:57:30.190994] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0] Resetting controller successful. 
00:28:21.396 passed 00:28:21.396 Test: blockdev write read 8 blocks ...passed 00:28:21.396 Test: blockdev write read size > 128k ...passed 00:28:21.396 Test: blockdev write read invalid size ...passed 00:28:21.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:21.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:21.396 Test: blockdev write read max offset ...passed 00:28:21.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:21.396 Test: blockdev writev readv 8 blocks ...passed 00:28:21.396 Test: blockdev writev readv 30 x 1block ...passed 00:28:21.396 Test: blockdev writev readv block ...passed 00:28:21.396 Test: blockdev writev readv size > 128k ...passed 00:28:21.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:21.396 Test: blockdev comparev and writev ...[2024-10-15 01:57:30.200040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bec0a000 len:0x1000 00:28:21.396 [2024-10-15 01:57:30.200101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:21.396 passed 00:28:21.396 Test: blockdev nvme passthru rw ...passed 00:28:21.396 Test: blockdev nvme passthru vendor specific ...[2024-10-15 01:57:30.200935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:28:21.396 Test: blockdev nvme admin passthru ... 00:28:21.396 [2024-10-15 01:57:30.201099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:21.396 passed 00:28:21.396 Test: blockdev copy ...passed 00:28:21.396 Suite: bdevio tests on: Nvme2n3 00:28:21.396 Test: blockdev write read block ...passed 00:28:21.396 Test: blockdev write zeroes read block ...passed 00:28:21.396 Test: blockdev write zeroes read no split ...passed 00:28:21.396 Test: blockdev write zeroes read split ...passed 00:28:21.396 Test: blockdev write zeroes read split partial ...passed 00:28:21.396 Test: blockdev reset ...[2024-10-15 01:57:30.266048] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:28:21.396 [2024-10-15 01:57:30.270438] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0] Resetting controller successful. passed 00:28:21.396 Test: blockdev write read 8 blocks ...
00:28:21.396 passed 00:28:21.396 Test: blockdev write read size > 128k ...passed 00:28:21.396 Test: blockdev write read invalid size ...passed 00:28:21.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:21.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:21.396 Test: blockdev write read max offset ...passed 00:28:21.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:21.396 Test: blockdev writev readv 8 blocks ...passed 00:28:21.396 Test: blockdev writev readv 30 x 1block ...passed 00:28:21.396 Test: blockdev writev readv block ...passed 00:28:21.396 Test: blockdev writev readv size > 128k ...passed 00:28:21.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:21.396 Test: blockdev comparev and writev ...[2024-10-15 01:57:30.279424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a2c04000 len:0x1000 00:28:21.396 [2024-10-15 01:57:30.279625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 passed 00:28:21.396 Test: blockdev nvme passthru rw ... 00:28:21.396 passed 00:28:21.396 Test: blockdev nvme passthru vendor specific ...passed 00:28:21.396 Test: blockdev nvme admin passthru ...[2024-10-15 01:57:30.280461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:21.396 [2024-10-15 01:57:30.280512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:21.396 passed 00:28:21.396 Test: blockdev copy ...passed 00:28:21.396 Suite: bdevio tests on: Nvme2n2 00:28:21.396 Test: blockdev write read block ...passed 00:28:21.396 Test: blockdev write zeroes read block ...passed 00:28:21.396 Test: blockdev write zeroes read no split ...passed 00:28:21.396 Test: blockdev write zeroes read split ...passed 00:28:21.396 Test: blockdev write zeroes read split partial ...passed 00:28:21.396 Test: blockdev reset ...[2024-10-15 01:57:30.347841] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:28:21.396 [2024-10-15 01:57:30.352342] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0] Resetting controller successful. passed
00:28:21.396 00:28:21.396 Test: blockdev write read 8 blocks ...passed 00:28:21.396 Test: blockdev write read size > 128k ...passed 00:28:21.396 Test: blockdev write read invalid size ...passed 00:28:21.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:21.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:21.396 Test: blockdev write read max offset ...passed 00:28:21.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:21.396 Test: blockdev writev readv 8 blocks ...passed 00:28:21.396 Test: blockdev writev readv 30 x 1block ...passed 00:28:21.396 Test: blockdev writev readv block ...passed 00:28:21.396 Test: blockdev writev readv size > 128k ...passed 00:28:21.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:21.396 Test: blockdev comparev and writev ...[2024-10-15 01:57:30.360215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d383a000 len:0x1000 00:28:21.396 [2024-10-15 01:57:30.360275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:21.396 passed 00:28:21.396 Test: blockdev nvme passthru rw ...passed 00:28:21.396 Test: blockdev nvme passthru vendor specific ...passed 00:28:21.396 Test: blockdev nvme admin passthru ...[2024-10-15 01:57:30.361225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:21.396 [2024-10-15 01:57:30.361273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:21.396 passed 00:28:21.396 Test: blockdev copy ...passed 00:28:21.396 Suite: bdevio tests on: Nvme2n1 00:28:21.396 Test: blockdev write read block ...passed 00:28:21.396 Test: blockdev write zeroes read block ...passed 00:28:21.396 Test: blockdev write zeroes read no split ...passed 00:28:21.396 Test: blockdev write zeroes read split ...passed 00:28:21.656 Test: blockdev write zeroes read split partial ...passed 00:28:21.656 Test: blockdev reset ...[2024-10-15 01:57:30.423446] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:28:21.656 passed 00:28:21.656 Test: blockdev write read 8 blocks ...[2024-10-15 01:57:30.427881] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0] Resetting controller successful. 
00:28:21.656 passed 00:28:21.656 Test: blockdev write read size > 128k ...passed 00:28:21.656 Test: blockdev write read invalid size ...passed 00:28:21.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:21.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:21.656 Test: blockdev write read max offset ...passed 00:28:21.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:21.656 Test: blockdev writev readv 8 blocks ...passed 00:28:21.656 Test: blockdev writev readv 30 x 1block ...passed 00:28:21.656 Test: blockdev writev readv block ...passed 00:28:21.656 Test: blockdev writev readv size > 128k ...passed 00:28:21.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:21.656 Test: blockdev comparev and writev ...[2024-10-15 01:57:30.435778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3834000 len:0x1000 00:28:21.656 [2024-10-15 01:57:30.435957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:21.656 passed 00:28:21.656 Test: blockdev nvme passthru rw ...passed 00:28:21.656 Test: blockdev nvme passthru vendor specific ...passed 00:28:21.656 Test: blockdev nvme admin passthru ...[2024-10-15 01:57:30.436813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:21.656 [2024-10-15 01:57:30.436862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:21.656 passed 00:28:21.656 Test: blockdev copy ...passed 00:28:21.656 Suite: bdevio tests on: Nvme1n1 00:28:21.656 Test: blockdev write read block ...passed 00:28:21.656 Test: blockdev write zeroes read block ...passed 00:28:21.656 Test: blockdev write zeroes read no split ...passed 00:28:21.656 Test: blockdev write zeroes read split ...passed 00:28:21.656 Test: blockdev write zeroes read split partial ...passed 00:28:21.656 Test: blockdev reset ...[2024-10-15 01:57:30.505624] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:28:21.656 [2024-10-15 01:57:30.509650] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0] Resetting controller successful.
00:28:21.656 passed 00:28:21.656 Test: blockdev write read 8 blocks ...passed 00:28:21.656 Test: blockdev write read size > 128k ...passed 00:28:21.656 Test: blockdev write read invalid size ...passed 00:28:21.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:21.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:21.656 Test: blockdev write read max offset ...passed 00:28:21.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:21.656 Test: blockdev writev readv 8 blocks ...passed 00:28:21.656 Test: blockdev writev readv 30 x 1block ...passed 00:28:21.656 Test: blockdev writev readv block ...passed 00:28:21.656 Test: blockdev writev readv size > 128k ...passed 00:28:21.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:21.656 Test: blockdev comparev and writev ...[2024-10-15 01:57:30.518474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3830000 len:0x1000 00:28:21.656 [2024-10-15 01:57:30.518681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:21.656 passed 00:28:21.656 Test: blockdev nvme passthru rw ...passed 00:28:21.656 Test: blockdev nvme passthru vendor specific ...passed 00:28:21.656 Test: blockdev nvme admin passthru ...[2024-10-15 01:57:30.519446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:21.656 [2024-10-15 01:57:30.519498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:21.656 passed 00:28:21.656 Test: blockdev copy ...passed 00:28:21.656 Suite: bdevio tests on: Nvme0n1 00:28:21.656 Test: blockdev write read block ...passed 00:28:21.656 Test: blockdev write zeroes read block ...passed 00:28:21.656 Test: blockdev write zeroes read no split ...passed 00:28:21.656 Test: blockdev write zeroes read split ...passed 00:28:21.656 Test: blockdev write zeroes read split partial ...passed 00:28:21.656 Test: blockdev reset ...[2024-10-15 01:57:30.588339] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:28:21.656 passed 00:28:21.656 Test: blockdev write read 8 blocks ...[2024-10-15 01:57:30.592178] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0] Resetting controller successful.
00:28:21.656 passed 00:28:21.656 Test: blockdev write read size > 128k ...passed 00:28:21.656 Test: blockdev write read invalid size ...passed 00:28:21.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:21.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:21.656 Test: blockdev write read max offset ...passed 00:28:21.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:21.656 Test: blockdev writev readv 8 blocks ...passed 00:28:21.656 Test: blockdev writev readv 30 x 1block ...passed 00:28:21.656 Test: blockdev writev readv block ...passed 00:28:21.656 Test: blockdev writev readv size > 128k ...passed 00:28:21.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:21.656 Test: blockdev comparev and writev ...passed 00:28:21.656 Test: blockdev nvme passthru rw ...[2024-10-15 01:57:30.599572] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:28:21.656 separate metadata which is not supported yet. 00:28:21.656 passed 00:28:21.656 Test: blockdev nvme passthru vendor specific ...passed 00:28:21.656 Test: blockdev nvme admin passthru ...[2024-10-15 01:57:30.600048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:28:21.656 [2024-10-15 01:57:30.600109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:28:21.656 passed 00:28:21.656 Test: blockdev copy ...passed 00:28:21.656 00:28:21.656 Run Summary: Type Total Ran Passed Failed Inactive 00:28:21.656 suites 6 6 n/a 0 0 00:28:21.656 tests 138 138 138 0 0 00:28:21.656 asserts 893 893 893 0 n/a 00:28:21.656 00:28:21.656 Elapsed time = 1.280 seconds 00:28:21.656 [2024-10-15 01:57:30.612298] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019e0dda0 was disconnected and freed. delete nvme_qpair. 00:28:21.656 [2024-10-15 01:57:30.613577] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019e3eda0 was disconnected and freed. delete nvme_qpair. 00:28:21.656 0 00:28:21.656 [2024-10-15 01:57:30.614807] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d22da0 was disconnected and freed. delete nvme_qpair. 00:28:21.656 [2024-10-15 01:57:30.616883] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20000083dda0 was disconnected and freed. delete nvme_qpair. 
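The CUnit summary above (6 suites, 138 tests, 893 asserts, none failed) is produced by bdevio; the COMPARE FAILURE and INVALID OPCODE notices interleaved through the suites are expected output, since the compare and admin-passthru tests deliberately provoke those statuses and assert on them. A rough sketch of reproducing the run standalone, mirroring the logged invocation (-w makes bdevio idle until the perform_tests RPC arrives; -s 0 matches the PRE_RESERVED_MEM setting above; root privileges assumed):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests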
00:28:21.656 01:57:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61671 00:28:21.656 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61671 ']' 00:28:21.656 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61671 00:28:21.656 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:28:21.656 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.656 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61671 00:28:21.915 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:21.915 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:21.915 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61671' 00:28:21.915 killing process with pid 61671 00:28:21.915 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61671 00:28:21.915 01:57:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61671 00:28:22.850 01:57:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:22.850 00:28:22.850 real 0m2.917s 00:28:22.850 user 0m6.982s 00:28:22.850 sys 0m0.467s 00:28:22.850 01:57:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:22.850 ************************************ 00:28:22.850 END TEST bdev_bounds 00:28:22.850 ************************************ 00:28:22.850 01:57:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:22.850 01:57:31 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:28:22.850 01:57:31 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:22.850 01:57:31 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:22.850 01:57:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:22.850 ************************************ 00:28:22.850 START TEST bdev_nbd 00:28:22.850 ************************************ 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' 
'/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61736 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61736 /var/tmp/spdk-nbd.sock 00:28:22.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61736 ']' 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.851 01:57:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:22.851 [2024-10-15 01:57:31.776029] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:22.851 [2024-10-15 01:57:31.776247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.109 [2024-10-15 01:57:31.945698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.368 [2024-10-15 01:57:32.202200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.935 [2024-10-15 01:57:32.664112] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:28:23.935 [2024-10-15 01:57:32.730770] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:28:23.935 [2024-10-15 01:57:32.804433] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992d8a0 was disconnected and freed. delete nvme_qpair. 00:28:23.935 [2024-10-15 01:57:32.870045] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019905920 was disconnected and freed. delete nvme_qpair. 
00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:23.935 01:57:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:24.501 1+0 records in 00:28:24.501 1+0 records out 00:28:24.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529227 s, 7.7 MB/s 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:24.501 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:24.501 01:57:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:24.502 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:24.502 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:24.502 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:24.502 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:28:24.759 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:24.760 1+0 records in 00:28:24.760 1+0 records out 00:28:24.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523519 s, 7.8 MB/s 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:24.760 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w 
nbd2 /proc/partitions 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:25.018 1+0 records in 00:28:25.018 1+0 records out 00:28:25.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775217 s, 5.3 MB/s 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:25.018 01:57:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:25.276 1+0 records in 00:28:25.276 1+0 records out 00:28:25.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495588 s, 8.3 MB/s 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:25.276 
01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:25.276 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:28:25.533 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:25.534 1+0 records in 00:28:25.534 1+0 records out 00:28:25.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718208 s, 5.7 MB/s 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:25.534 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:28:25.792 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:26.050 1+0 records in 00:28:26.050 1+0 records out 00:28:26.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742576 s, 5.5 MB/s 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:26.050 01:57:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:26.344 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd0", 00:28:26.344 "bdev_name": "Nvme0n1" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd1", 00:28:26.344 "bdev_name": "Nvme1n1" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd2", 00:28:26.344 "bdev_name": "Nvme2n1" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd3", 00:28:26.344 "bdev_name": "Nvme2n2" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd4", 00:28:26.344 "bdev_name": "Nvme2n3" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd5", 00:28:26.344 "bdev_name": "Nvme3n1" 00:28:26.344 } 00:28:26.344 ]' 00:28:26.344 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:26.344 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd0", 00:28:26.344 "bdev_name": "Nvme0n1" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd1", 00:28:26.344 "bdev_name": "Nvme1n1" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd2", 00:28:26.344 "bdev_name": "Nvme2n1" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd3", 00:28:26.344 "bdev_name": "Nvme2n2" 00:28:26.344 }, 00:28:26.344 { 00:28:26.344 "nbd_device": "/dev/nbd4", 00:28:26.344 "bdev_name": "Nvme2n3" 00:28:26.344 }, 00:28:26.344 { 00:28:26.345 "nbd_device": "/dev/nbd5", 00:28:26.345 "bdev_name": "Nvme3n1" 00:28:26.345 } 00:28:26.345 ]' 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:26.345 
01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:26.345 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:26.656 [2024-10-15 01:57:35.489020] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019907da0 was disconnected and freed. delete nvme_qpair. 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:26.656 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:26.914 [2024-10-15 01:57:35.800358] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d43a0 was disconnected and freed. delete nvme_qpair. 
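The waitfornbd traces above (common/autotest_common.sh@868-@889) all follow one pattern: poll /proc/partitions until the kernel exposes the device, then read a single block back through it. A minimal bash sketch of that helper, reconstructed from the trace; the 20-iteration bound, the grep, and the 4096-byte direct-I/O read-back are taken from the log, while the sleep interval and the shortened temp-file path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i
        local test_file=/tmp/nbdtest   # assumption: this run uses test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            # ready once the kernel lists the device in its partition table
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                  # assumption: the trace does not show the interval
        done
        # sanity-check the device by reading one 4 KiB block through it
        dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$test_file")
        rm -f "$test_file"
        [ "$size" != 0 ]
    }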
00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:26.914 01:57:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:27.173 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:27.431 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:28:27.997 [2024-10-15 01:57:36.709849] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019fffda0 was disconnected and freed. delete nvme_qpair. 
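The teardown traced here is the inverse: nbd_common.sh@53-@55 walks the device list, detaches each device over the SPDK RPC socket, and waitfornbd_exit (@35-@45) polls until the kernel drops it. A hedged sketch of both helpers, with the same assumed 0.1 s poll interval (rpc.py path relative to the spdk repo root):

    nbd_stop_disks() {
        local rpc_server=$1 i
        local nbd_list=($2)            # e.g. '/dev/nbd0 /dev/nbd1 ...'
        for i in "${nbd_list[@]}"; do
            # ask the SPDK app to detach the device
            scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the device no longer appears in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                  # assumption: interval not shown in the trace
        done
        return 0
    }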
00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:27.997 01:57:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:28:28.255 [2024-10-15 01:57:37.011546] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019f3eda0 was disconnected and freed. delete nvme_qpair. 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.255 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 
/dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:28.513 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:28:28.771 /dev/nbd0 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:28.771 1+0 records in 00:28:28.771 1+0 records out 00:28:28.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458427 s, 8.9 MB/s 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:28.771 01:57:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:28:29.029 /dev/nbd1 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:29.287 1+0 records in 00:28:29.287 1+0 records out 00:28:29.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434621 s, 9.4 MB/s 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:29.287 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:28:29.546 /dev/nbd10 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:28:29.546 01:57:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:29.546 1+0 records in 00:28:29.546 1+0 records out 00:28:29.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055804 s, 7.3 MB/s 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:29.546 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:28:29.805 /dev/nbd11 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:29.805 1+0 records in 00:28:29.805 1+0 records out 00:28:29.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686578 s, 6.0 MB/s 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:29.805 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:29.805 01:57:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:28:30.063 /dev/nbd12 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:30.063 1+0 records in 00:28:30.063 1+0 records out 00:28:30.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858195 s, 4.8 MB/s 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:30.063 01:57:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:28:30.323 /dev/nbd13 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:28:30.323 1+0 records in 00:28:30.323 1+0 records out 00:28:30.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729426 s, 5.6 MB/s 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.323 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:30.581 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd0", 00:28:30.582 "bdev_name": "Nvme0n1" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd1", 00:28:30.582 "bdev_name": "Nvme1n1" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd10", 00:28:30.582 "bdev_name": "Nvme2n1" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd11", 00:28:30.582 "bdev_name": "Nvme2n2" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd12", 00:28:30.582 "bdev_name": "Nvme2n3" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd13", 00:28:30.582 "bdev_name": "Nvme3n1" 00:28:30.582 } 00:28:30.582 ]' 00:28:30.582 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd0", 00:28:30.582 "bdev_name": "Nvme0n1" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd1", 00:28:30.582 "bdev_name": "Nvme1n1" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd10", 00:28:30.582 "bdev_name": "Nvme2n1" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd11", 00:28:30.582 "bdev_name": "Nvme2n2" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd12", 00:28:30.582 "bdev_name": "Nvme2n3" 00:28:30.582 }, 00:28:30.582 { 00:28:30.582 "nbd_device": "/dev/nbd13", 00:28:30.582 "bdev_name": "Nvme3n1" 00:28:30.582 } 00:28:30.582 ]' 00:28:30.582 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:30.841 /dev/nbd1 00:28:30.841 /dev/nbd10 00:28:30.841 /dev/nbd11 00:28:30.841 /dev/nbd12 00:28:30.841 /dev/nbd13' 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:30.841 /dev/nbd1 00:28:30.841 /dev/nbd10 00:28:30.841 /dev/nbd11 00:28:30.841 /dev/nbd12 00:28:30.841 /dev/nbd13' 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:28:30.841 
01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:30.841 256+0 records in 00:28:30.841 256+0 records out 00:28:30.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105803 s, 99.1 MB/s 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:30.841 256+0 records in 00:28:30.841 256+0 records out 00:28:30.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131998 s, 7.9 MB/s 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:30.841 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:31.100 256+0 records in 00:28:31.100 256+0 records out 00:28:31.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149745 s, 7.0 MB/s 00:28:31.100 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:31.100 01:57:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:28:31.358 256+0 records in 00:28:31.358 256+0 records out 00:28:31.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166066 s, 6.3 MB/s 00:28:31.358 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:31.359 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:28:31.359 256+0 records in 00:28:31.359 256+0 records out 00:28:31.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165147 s, 6.3 MB/s 00:28:31.359 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:31.359 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:28:31.617 256+0 records in 00:28:31.617 256+0 records out 00:28:31.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155203 s, 6.8 MB/s 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 
oflag=direct 00:28:31.617 256+0 records in 00:28:31.617 256+0 records out 00:28:31.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162342 s, 6.5 MB/s 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:31.617 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.876 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:32.134 
01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:32.134 [2024-10-15 01:57:40.989995] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019907da0 was disconnected and freed. delete nvme_qpair. 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.134 01:57:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:32.393 [2024-10-15 01:57:41.276872] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d43a0 was disconnected and freed. delete nvme_qpair. 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.393 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.652 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd11 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.224 01:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:28:33.484 [2024-10-15 01:57:42.376335] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019fffda0 was disconnected and freed. delete nvme_qpair. 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.484 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:28:33.743 [2024-10-15 01:57:42.697341] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019f3eda0 was disconnected and freed. delete nvme_qpair. 
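The data pass that preceded this teardown (nbd_common.sh@70-@85 above) wrote one 1 MiB buffer of random data to all six devices and then byte-compared it back. Condensed into a sketch; every command appears verbatim in the trace, only the temp-file path is shortened from the workspace path:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest   # trace uses test/bdev/nbdrandtest
        if [ "$operation" = write ]; then
            # seed 256 x 4 KiB of random data, then copy it to every device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # byte-compare the first 1 MiB of each device against the seed
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }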
00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.743 01:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:34.309 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:34.309 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:34.309 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:34.310 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:34.568 malloc_lvol_verify 00:28:34.568 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:34.826 750b70e3-6838-41d5-b02a-a1366e697944 00:28:34.826 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:35.084 95080af1-e649-4318-b9a6-c623ccb7057e 00:28:35.084 01:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:35.342 /dev/nbd0 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:35.342 01:57:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:35.342 mke2fs 1.47.0 (5-Feb-2023) 00:28:35.342 Discarding device blocks: 0/4096 done 00:28:35.342 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:35.342 00:28:35.342 Allocating group tables: 0/1 done 00:28:35.342 Writing inode tables: 0/1 done 00:28:35.342 Creating journal (1024 blocks): done 00:28:35.342 Writing superblocks and filesystem accounting information: 0/1 done 00:28:35.342 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:35.342 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61736 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61736 ']' 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61736 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.600 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61736 00:28:35.858 killing process with pid 61736 00:28:35.858 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.858 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.858 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61736' 00:28:35.858 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61736 00:28:35.858 01:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61736 00:28:37.249 ************************************ 00:28:37.249 END TEST bdev_nbd 00:28:37.249 
************************************ 00:28:37.249 01:57:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:37.249 00:28:37.249 real 0m14.243s 00:28:37.249 user 0m20.363s 00:28:37.249 sys 0m4.481s 00:28:37.249 01:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:37.249 01:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:37.249 01:57:45 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:28:37.249 01:57:45 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:28:37.249 skipping fio tests on NVMe due to multi-ns failures. 00:28:37.249 01:57:45 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:37.249 01:57:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:37.249 01:57:45 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:37.249 01:57:45 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:28:37.249 01:57:45 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:37.249 01:57:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:37.249 ************************************ 00:28:37.249 START TEST bdev_verify 00:28:37.249 ************************************ 00:28:37.249 01:57:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:37.249 [2024-10-15 01:57:46.070843] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:37.249 [2024-10-15 01:57:46.071027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62158 ] 00:28:37.249 [2024-10-15 01:57:46.250087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:37.815 [2024-10-15 01:57:46.550223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.815 [2024-10-15 01:57:46.550249] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.073 [2024-10-15 01:57:47.035158] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:28:38.332 [2024-10-15 01:57:47.104664] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:28:38.332 [2024-10-15 01:57:47.180634] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992d8a0 was disconnected and freed. delete nvme_qpair. 00:28:38.332 [2024-10-15 01:57:47.248893] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019905920 was disconnected and freed. delete nvme_qpair. 00:28:38.332 Running I/O for 5 seconds... 
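The verify stage starting here drives all six bdevs through the bdevperf example. Reconstructing the invocation from the run_test trace above (queue depth 128, 4 KiB I/Os, verify workload, 5 seconds, core mask 0x3 for the two reactors; the trailing empty argument shown in the trace is omitted):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3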
00:28:40.642 18880.00 IOPS, 73.75 MiB/s [2024-10-15T01:57:50.588Z]
18432.00 IOPS, 72.00 MiB/s [2024-10-15T01:57:51.984Z]
18858.67 IOPS, 73.67 MiB/s [2024-10-15T01:57:52.552Z]
18880.00 IOPS, 73.75 MiB/s [2024-10-15T01:57:52.552Z]
[2024-10-15 01:57:52.409540] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019de3920 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.411118] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019a005e0 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.414740] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200013800aa0 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.416240] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200007000860 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.417703] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a400120 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.419160] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001a6004e0 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.428866] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a600220 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.430476] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d53a0 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.437722] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001adffe20 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.439397] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001ad3eda0 was disconnected and freed. delete nvme_qpair.
00:28:43.540 [2024-10-15 01:57:52.444914] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001afffe20 was disconnected and freed. delete nvme_qpair.
00:28:43.540 18752.00 IOPS, 73.25 MiB/s [2024-10-15T01:57:52.552Z]
[2024-10-15 01:57:52.456737] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001af3eda0 was disconnected and freed. delete nvme_qpair.
00:28:43.540 
00:28:43.540 Latency(us)
00:28:43.540 [2024-10-15T01:57:52.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.540 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x0 length 0xbd0bd
00:28:43.540 Nvme0n1 : 5.07 1551.79 6.06 0.00 0.00 82093.28 10843.23 77689.95
00:28:43.540 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:28:43.540 Nvme0n1 : 5.06 1542.10 6.02 0.00 0.00 82814.10 16681.89 92941.96
00:28:43.540 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x0 length 0xa0000
00:28:43.540 Nvme1n1 : 5.08 1550.91 6.06 0.00 0.00 81992.63 9651.67 75306.82
00:28:43.540 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0xa0000 length 0xa0000
00:28:43.540 Nvme1n1 : 5.07 1540.81 6.02 0.00 0.00 82725.25 19184.17 90558.84
00:28:43.540 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x0 length 0x80000
00:28:43.540 Nvme2n1 : 5.09 1558.39 6.09 0.00 0.00 81667.56 9532.51 72923.69
00:28:43.540 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x80000 length 0x80000
00:28:43.540 Nvme2n1 : 5.07 1540.19 6.02 0.00 0.00 82530.23 18707.55 87699.08
00:28:43.540 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x0 length 0x80000
00:28:43.540 Nvme2n2 : 5.10 1557.02 6.08 0.00 0.00 81573.71 12392.26 71017.19
00:28:43.540 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x80000 length 0x80000
00:28:43.540 Nvme2n2 : 5.07 1539.59 6.01 0.00 0.00 82407.09 18826.71 83886.08
00:28:43.540 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x0 length 0x80000
00:28:43.540 Nvme2n3 : 5.10 1556.44 6.08 0.00 0.00 81455.36 12690.15 73876.95
00:28:43.540 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x80000 length 0x80000
00:28:43.540 Nvme2n3 : 5.07 1539.01 6.01 0.00 0.00 82293.18 16086.11 84839.33
00:28:43.540 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x0 length 0x20000
00:28:43.540 Nvme3n1 : 5.10 1555.80 6.08 0.00 0.00 81340.60 10604.92 77213.32
00:28:43.540 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:43.540 Verification LBA range: start 0x20000 length 0x20000
00:28:43.540 Nvme3n1 : 5.08 1538.14 6.01 0.00 0.00 82188.29 11141.12 90082.21
00:28:43.540 [2024-10-15T01:57:52.552Z] ===================================================================================================================
00:28:43.540 [2024-10-15T01:57:52.552Z] Total : 18570.19 72.54 0.00 0.00 82086.89 9532.51 92941.96
00:28:45.440 
00:28:45.440 real 0m7.993s
00:28:45.440 user 0m14.378s
00:28:45.440 sys 0m0.361s
00:28:45.440 01:57:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:45.440 01:57:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:28:45.440 ************************************
00:28:45.440 END TEST bdev_verify
************************************ 00:28:45.440 01:57:53 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:45.440 01:57:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:28:45.440 01:57:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:45.440 01:57:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:45.440 ************************************ 00:28:45.440 START TEST bdev_verify_big_io 00:28:45.440 ************************************ 00:28:45.440 01:57:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:45.440 [2024-10-15 01:57:54.104664] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:45.440 [2024-10-15 01:57:54.104839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62262 ] 00:28:45.440 [2024-10-15 01:57:54.271104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:45.698 [2024-10-15 01:57:54.516810] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.698 [2024-10-15 01:57:54.516817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.998 [2024-10-15 01:57:54.953746] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:28:46.257 [2024-10-15 01:57:55.020803] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:28:46.257 [2024-10-15 01:57:55.094747] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992d8a0 was disconnected and freed. delete nvme_qpair. 00:28:46.257 [2024-10-15 01:57:55.161242] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019905920 was disconnected and freed. delete nvme_qpair. 00:28:46.514 Running I/O for 5 seconds... 00:28:51.705 1902.00 IOPS, 118.88 MiB/s [2024-10-15T01:58:01.285Z] 3076.00 IOPS, 192.25 MiB/s [2024-10-15T01:58:01.285Z] [2024-10-15 01:58:01.188288] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a100260 was disconnected and freed. delete nvme_qpair. 00:28:52.273 [2024-10-15 01:58:01.189881] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019600260 was disconnected and freed. delete nvme_qpair. 00:28:52.273 [2024-10-15 01:58:01.191227] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a1d72a0 was disconnected and freed. delete nvme_qpair. 00:28:52.273 [2024-10-15 01:58:01.192607] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a1cf2a0 was disconnected and freed. delete nvme_qpair. 00:28:52.273 [2024-10-15 01:58:01.197349] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000199f72a0 was disconnected and freed. delete nvme_qpair. 
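
The bdev_verify_big_io stage launched above repeats the same verify workload with 64 KiB I/Os; relative to the previous bdevperf invocation only the -o value changes:

    # Same sketch as for bdev_verify, with the I/O size raised to 64 KiB:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
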
00:28:52.273 [2024-10-15 01:58:01.199471] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019d00220 was disconnected and freed. delete nvme_qpair.
00:28:52.273 [2024-10-15 01:58:01.200790] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a1d4260 was disconnected and freed. delete nvme_qpair.
00:28:52.273 [2024-10-15 01:58:01.206088] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x2000199fc260 was disconnected and freed. delete nvme_qpair.
00:28:52.273 [2024-10-15 01:58:01.209314] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019deb2a0 was disconnected and freed. delete nvme_qpair.
00:28:52.273 [2024-10-15 01:58:01.213711] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019de31a0 was disconnected and freed. delete nvme_qpair.
00:28:52.273 [2024-10-15 01:58:01.217563] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992f160 was disconnected and freed. delete nvme_qpair.
00:28:52.273 [2024-10-15 01:58:01.227079] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200003aff160 was disconnected and freed. delete nvme_qpair.
00:28:52.273 3458.67 IOPS, 216.17 MiB/s
00:28:52.273 
00:28:52.273 Latency(us)
00:28:52.273 [2024-10-15T01:58:01.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.273 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x0 length 0xbd0b
00:28:52.273 Nvme0n1 : 5.61 154.01 9.63 0.00 0.00 810672.05 33840.41 934185.89
00:28:52.273 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0xbd0b length 0xbd0b
00:28:52.273 Nvme0n1 : 5.71 129.89 8.12 0.00 0.00 946256.78 28955.00 937998.89
00:28:52.273 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x0 length 0xa000
00:28:52.273 Nvme1n1 : 5.71 153.13 9.57 0.00 0.00 783780.29 51713.86 808356.77
00:28:52.273 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0xa000 length 0xa000
00:28:52.273 Nvme1n1 : 5.71 131.42 8.21 0.00 0.00 913333.21 68157.44 880803.84
00:28:52.273 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x0 length 0x8000
00:28:52.273 Nvme2n1 : 5.82 154.77 9.67 0.00 0.00 749902.37 95801.72 762600.73
00:28:52.273 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x8000 length 0x8000
00:28:52.273 Nvme2n1 : 5.71 134.46 8.40 0.00 0.00 874055.37 101521.22 896055.85
00:28:52.273 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x0 length 0x8000
00:28:52.273 Nvme2n2 : 5.71 156.82 9.80 0.00 0.00 729401.05 95325.09 762600.73
00:28:52.273 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x8000 length 0x8000
00:28:52.273 Nvme2n2 : 5.81 135.69 8.48 0.00 0.00 836149.65 100567.97 911307.87
00:28:52.273 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x0 length 0x8000
00:28:52.273 Nvme2n3 : 5.87 170.50 10.66 0.00 0.00 658398.80 7149.38 758787.72
00:28:52.273 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x8000 length 0x8000
00:28:52.273 Nvme2n3 : 5.87 149.17 9.32 0.00 0.00 751412.11 9889.98 930372.89
00:28:52.273 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x0 length 0x2000
00:28:52.273 Nvme3n1 : 5.88 173.94 10.87 0.00 0.00 626939.91 5719.51 777852.74
00:28:52.273 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.273 Verification LBA range: start 0x2000 length 0x2000
00:28:52.273 Nvme3n1 : 5.87 152.68 9.54 0.00 0.00 713153.25 6613.18 953250.91
00:28:52.273 [2024-10-15T01:58:01.285Z] ===================================================================================================================
00:28:52.273 [2024-10-15T01:58:01.285Z] Total : 1796.49 112.28 0.00 0.00 773644.33 5719.51 953250.91
00:28:54.222 
00:28:54.222 real 0m9.081s
00:28:54.222 user 0m16.641s
00:28:54.222 sys 0m0.362s
00:28:54.222 01:58:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:54.222 ************************************
00:28:54.222 END TEST bdev_verify_big_io
00:28:54.222 ************************************
00:28:54.222 01:58:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:28:54.222 01:58:03 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:54.222 01:58:03 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:28:54.223 01:58:03 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:54.223 01:58:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:28:54.223 ************************************
00:28:54.223 START TEST bdev_write_zeroes
00:28:54.223 ************************************
00:28:54.223 01:58:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:54.481 [2024-10-15 01:58:03.299970] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
00:28:54.481 [2024-10-15 01:58:03.300158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62382 ]
00:28:54.481 [2024-10-15 01:58:03.467688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:54.740 [2024-10-15 01:58:03.712245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:28:55.307 [2024-10-15 01:58:04.166705] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair.
00:28:55.307 [2024-10-15 01:58:04.234260] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair.
00:28:55.307 [2024-10-15 01:58:04.307903] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001992d8a0 was disconnected and freed. delete nvme_qpair.
00:28:55.565 [2024-10-15 01:58:04.374665] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019905920 was disconnected and freed. delete nvme_qpair.
00:28:55.565 Running I/O for 1 seconds...
00:28:56.545 51456.00 IOPS, 201.00 MiB/s
00:28:56.545 
00:28:56.545 Latency(us)
00:28:56.545 [2024-10-15T01:58:05.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:56.545 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:56.545 Nvme0n1 : 1.03 8502.23 33.21 0.00 0.00 15013.90 11558.17 28716.68
00:28:56.545 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:56.545 Nvme1n1 : 1.03 8488.12 33.16 0.00 0.00 15012.82 12094.37 28001.75
00:28:56.545 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:56.545 Nvme2n1 : 1.03 8474.27 33.10 0.00 0.00 14975.28 11677.32 27167.65
00:28:56.545 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:56.545 Nvme2n2 : 1.04 8460.46 33.05 0.00 0.00 14937.88 11141.12 26452.71
00:28:56.545 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:56.545 Nvme2n3 : 1.04 8446.89 33.00 0.00 0.00 14916.97 8638.84 27286.81
00:28:56.545 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:56.545 Nvme3n1 : 1.04 8433.32 32.94 0.00 0.00 14904.54 7923.90 29312.47
00:28:56.545 [2024-10-15T01:58:05.557Z] ===================================================================================================================
00:28:56.545 [2024-10-15T01:58:05.557Z] Total : 50805.28 198.46 0.00 0.00 14960.23 7923.90 29312.47
00:28:56.545 [2024-10-15 01:58:05.493016] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019931160 was disconnected and freed. delete nvme_qpair.
00:28:56.545 [2024-10-15 01:58:05.494572] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d4aa0 was disconnected and freed. delete nvme_qpair.
00:28:56.545 [2024-10-15 01:58:05.496138] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x2000009ff160 was disconnected and freed. delete nvme_qpair.
00:28:56.545 [2024-10-15 01:58:05.497722] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20000093eda0 was disconnected and freed. delete nvme_qpair.
00:28:56.545 [2024-10-15 01:58:05.499265] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x2000005ffbe0 was disconnected and freed. delete nvme_qpair.
00:28:56.545 [2024-10-15 01:58:05.504973] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20000053eda0 was disconnected and freed. delete nvme_qpair.
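
The average latencies in the write_zeroes summary above line up with Little's law for a closed queue: average latency ~ queue depth / IOPS. A quick check against the Nvme0n1 row (depth 128, 8502.23 IOPS):

    # Little's law estimate for the Nvme0n1 job above:
    awk 'BEGIN { printf "%.0f us\n", 128 / 8502.23 * 1e6 }'   # ~15055 us vs 15013.90 us reported
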
00:28:58.448 00:28:58.448 real 0m3.785s 00:28:58.448 user 0m3.324s 00:28:58.448 sys 0m0.333s 00:28:58.448 01:58:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.448 01:58:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:28:58.448 ************************************ 00:28:58.448 END TEST bdev_write_zeroes 00:28:58.448 ************************************ 00:28:58.448 01:58:06 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:58.448 01:58:06 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:28:58.448 01:58:06 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.448 01:58:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:58.448 ************************************ 00:28:58.448 START TEST bdev_json_nonenclosed 00:28:58.448 ************************************ 00:28:58.448 01:58:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:58.448 [2024-10-15 01:58:07.091166] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:58.448 [2024-10-15 01:58:07.091461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62435 ] 00:28:58.448 [2024-10-15 01:58:07.261486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.707 [2024-10-15 01:58:07.512240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.707 [2024-10-15 01:58:07.512366] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:58.707 [2024-10-15 01:58:07.512395] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:58.707 [2024-10-15 01:58:07.512435] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:58.965 00:28:58.965 real 0m0.986s 00:28:58.965 user 0m0.723s 00:28:58.965 sys 0m0.156s 00:28:58.965 01:58:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.965 01:58:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:28:58.965 ************************************ 00:28:58.965 END TEST bdev_json_nonenclosed 00:28:58.965 ************************************ 00:28:59.223 01:58:08 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:59.223 01:58:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:28:59.223 01:58:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.223 01:58:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:59.223 ************************************ 00:28:59.223 START TEST bdev_json_nonarray 00:28:59.223 ************************************ 00:28:59.223 01:58:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:59.223 [2024-10-15 01:58:08.150853] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:28:59.223 [2024-10-15 01:58:08.151040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62472 ] 00:28:59.482 [2024-10-15 01:58:08.333201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.740 [2024-10-15 01:58:08.612548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.740 [2024-10-15 01:58:08.612779] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
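
The two negative JSON tests here exercise json_config_prepare_ctx's validation: nonenclosed.json trips the "not enclosed in {}" check and nonarray.json the "'subsystems' should be an array" check seen above. The fixture contents are not shown in the log; hypothetical minimal configs that would provoke the same two errors might look like this (illustrative sketches only, not the repository fixtures):

    # Hypothetical: top-level value is an array rather than a JSON object
    # -> "Invalid JSON configuration: not enclosed in {}."
    printf '[ { "subsystem": "bdev", "config": [] } ]\n' > nonenclosed.json
    # Hypothetical: "subsystems" maps to an object instead of an array
    # -> "Invalid JSON configuration: 'subsystems' should be an array."
    printf '{ "subsystems": { "subsystem": "bdev", "config": [] } }\n' > nonarray.json
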
00:28:59.740 [2024-10-15 01:58:08.612810] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:59.740 [2024-10-15 01:58:08.612824] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:00.307 00:29:00.307 real 0m1.014s 00:29:00.307 user 0m0.723s 00:29:00.307 sys 0m0.184s 00:29:00.307 01:58:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.307 ************************************ 00:29:00.307 END TEST bdev_json_nonarray 00:29:00.307 ************************************ 00:29:00.307 01:58:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:29:00.307 01:58:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:29:00.307 00:29:00.307 real 0m47.430s 00:29:00.307 user 1m9.884s 00:29:00.307 sys 0m7.653s 00:29:00.307 01:58:09 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.307 01:58:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 ************************************ 00:29:00.307 END TEST blockdev_nvme 00:29:00.307 ************************************ 00:29:00.307 01:58:09 -- spdk/autotest.sh@209 -- # uname -s 00:29:00.307 01:58:09 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:29:00.307 01:58:09 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:00.307 01:58:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:00.307 01:58:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.307 01:58:09 -- common/autotest_common.sh@10 -- # set +x 00:29:00.307 ************************************ 00:29:00.307 START TEST blockdev_nvme_gpt 00:29:00.307 ************************************ 00:29:00.307 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:00.307 * Looking for test storage... 
00:29:00.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:00.307 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:00.307 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:29:00.307 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.566 01:58:09 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:29:00.566 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc 
genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:00.566 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62556 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62556 
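
The lcov version probe traced above picks coverage options by comparing "1.15" against "2" component by component, after splitting both strings on '.', '-' and ':'. A condensed sketch of that lt/cmp_versions logic from scripts/common.sh (numeric components assumed; the real helper also validates each field through its decimal() check):

    lt() {   # return 0 (true) when version $1 sorts strictly before $2
        local -a ver1 ver2; local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions compare equal
    }
    lt 1.15 2 && echo "use pre-2.0 lcov options"   # the branch taken in the trace above
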
00:29:00.567 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62556 ']' 00:29:00.567 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.567 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.567 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.567 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.567 01:58:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:00.567 01:58:09 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:00.567 [2024-10-15 01:58:09.460596] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:29:00.567 [2024-10-15 01:58:09.460757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62556 ] 00:29:00.825 [2024-10-15 01:58:09.633399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.084 [2024-10-15 01:58:09.906903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.020 01:58:10 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.020 01:58:10 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:29:02.020 01:58:10 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:02.020 01:58:10 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:29:02.020 01:58:10 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:02.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:02.536 Waiting for block devices as requested 00:29:02.536 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:02.536 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:02.794 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:02.794 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:08.061 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:08.061 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.061 01:58:16 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.061 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:08.062 01:58:16 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:08.062 BYT; 00:29:08.062 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:08.062 BYT; 00:29:08.062 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:08.062 01:58:16 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:08.062 01:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:08.996 The operation has completed successfully. 00:29:08.996 01:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:09.944 The operation has completed successfully. 00:29:09.944 01:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:10.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:11.077 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:11.077 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:11.077 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:11.336 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:11.336 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:29:11.336 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.336 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.336 [] 00:29:11.336 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.336 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:29:11.336 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:29:11.336 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:11.336 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:11.336 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:11.336 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.336 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.336 [2024-10-15 01:58:20.312114] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035017da0 was disconnected and freed. delete nvme_qpair. 00:29:11.594 [2024-10-15 01:58:20.380286] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035000720 was disconnected and freed. delete nvme_qpair. 00:29:11.594 [2024-10-15 01:58:20.451852] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001bc097a0 was disconnected and freed. delete nvme_qpair. 
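
End to end, the GPT setup traced above labels the scratch namespace with parted, then retags both partitions via sgdisk with the SPDK partition-type GUIDs parsed out of module/bdev/gpt/gpt.h; collected from the trace, the sequence is:

    # Create a GPT label with two half-disk partitions, then stamp partition 1
    # with SPDK_GPT_GUID and partition 2 with SPDK_GPT_OLD_GUID (commands and
    # GUIDs exactly as traced; each sgdisk run prints "The operation has
    # completed successfully." above).
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
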
00:29:11.594 [2024-10-15 01:58:20.517463] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001c72a920 was disconnected and freed. delete nvme_qpair. 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:11.594 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.594 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:11.854 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.854 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:11.854 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:11.855 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a878e0ec-919d-41c0-b280-87da9a352e6c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a878e0ec-919d-41c0-b280-87da9a352e6c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' 
' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bde1854f-7b69-4359-adf1-92ffda77c158"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bde1854f-7b69-4359-adf1-92ffda77c158",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": 
true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "798e03ca-ceae-4604-8932-211f434b7dce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "798e03ca-ceae-4604-8932-211f434b7dce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "aa5616c8-1ff1-41f9-8487-812f27287ea4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa5616c8-1ff1-41f9-8487-812f27287ea4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 
0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "754a8965-6d46-473f-ae5c-2ec765b8327b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "754a8965-6d46-473f-ae5c-2ec765b8327b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:11.855 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:29:11.855 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:29:11.855 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:11.855 01:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62556 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62556 ']' 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62556 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62556 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.855 killing process with pid 62556 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62556' 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62556 00:29:11.855 01:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62556 00:29:14.420 01:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:14.420 01:58:23 blockdev_nvme_gpt -- 
bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:14.420 01:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:14.420 01:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.420 01:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.420 ************************************ 00:29:14.420 START TEST bdev_hello_world 00:29:14.420 ************************************ 00:29:14.420 01:58:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:14.420 [2024-10-15 01:58:23.416967] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:29:14.420 [2024-10-15 01:58:23.417126] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63197 ] 00:29:14.678 [2024-10-15 01:58:23.592547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.937 [2024-10-15 01:58:23.866529] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.507 [2024-10-15 01:58:24.343110] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:29:15.507 [2024-10-15 01:58:24.411241] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:29:15.507 [2024-10-15 01:58:24.483534] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001993a8a0 was disconnected and freed. delete nvme_qpair. 00:29:15.770 [2024-10-15 01:58:24.549142] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200003e3b920 was disconnected and freed. delete nvme_qpair. 00:29:15.770 [2024-10-15 01:58:24.567534] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:15.770 [2024-10-15 01:58:24.567583] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:15.770 [2024-10-15 01:58:24.567624] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:15.770 [2024-10-15 01:58:24.570943] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:15.770 [2024-10-15 01:58:24.571491] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:15.770 [2024-10-15 01:58:24.571546] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:15.770 [2024-10-15 01:58:24.571810] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:15.770 00:29:15.770 [2024-10-15 01:58:24.571869] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:15.770 [2024-10-15 01:58:24.573491] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200003e3dda0 was disconnected and freed. delete nvme_qpair. 
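Note on the bdev_hello_world step above: it drives SPDK's stock hello_bdev example end to end against the bdev named Nvme0n1 from the JSON config (open the bdev, open an I/O channel, write a buffer, read it back, print the recovered string, stop the app). A minimal way to rerun the same step by hand, assuming the repo layout used in this run (/home/vagrant/spdk_repo/spdk) and a config file that defines Nvme0n1:
  cd /home/vagrant/spdk_repo/spdk
  # same binary and flags the harness uses; -b selects which bdev to open
  sudo ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b Nvme0n1
On success the example prints the same NOTICE sequence seen above (open bdev, open io channel, write complete, read complete, "Hello World!").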
00:29:17.145 00:29:17.145 real 0m2.551s 00:29:17.145 user 0m2.120s 00:29:17.145 sys 0m0.316s 00:29:17.145 01:58:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.145 01:58:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:17.145 ************************************ 00:29:17.145 END TEST bdev_hello_world 00:29:17.145 ************************************ 00:29:17.145 01:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:17.145 01:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:17.146 01:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:17.146 01:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:17.146 ************************************ 00:29:17.146 START TEST bdev_bounds 00:29:17.146 ************************************ 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63245 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:17.146 Process bdevio pid: 63245 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63245' 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63245 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 63245 ']' 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.146 01:58:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:17.146 [2024-10-15 01:58:26.038008] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:29:17.146 [2024-10-15 01:58:26.038226] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63245 ] 00:29:17.405 [2024-10-15 01:58:26.216876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:17.664 [2024-10-15 01:58:26.472073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.664 [2024-10-15 01:58:26.472132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.664 [2024-10-15 01:58:26.472137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.923 [2024-10-15 01:58:26.912181] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:29:18.182 [2024-10-15 01:58:26.981140] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:29:18.182 [2024-10-15 01:58:27.053415] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001993a8a0 was disconnected and freed. delete nvme_qpair. 00:29:18.182 [2024-10-15 01:58:27.131264] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20000083b920 was disconnected and freed. delete nvme_qpair. 00:29:18.182 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.182 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:29:18.182 01:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:18.440 I/O targets: 00:29:18.440 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:18.440 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:18.440 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:18.440 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:18.440 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:18.440 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:18.440 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:18.440 00:29:18.440 00:29:18.440 CUnit - A unit testing framework for C - Version 2.1-3 00:29:18.440 http://cunit.sourceforge.net/ 00:29:18.440 00:29:18.440 00:29:18.440 Suite: bdevio tests on: Nvme3n1 00:29:18.440 Test: blockdev write read block ...passed 00:29:18.440 Test: blockdev write zeroes read block ...passed 00:29:18.440 Test: blockdev write zeroes read no split ...passed 00:29:18.440 Test: blockdev write zeroes read split ...passed 00:29:18.440 Test: blockdev write zeroes read split partial ...passed 00:29:18.440 Test: blockdev reset ...[2024-10-15 01:58:27.376886] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:29:18.440 passed 00:29:18.440 Test: blockdev write read 8 blocks ...[2024-10-15 01:58:27.380688] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0] Resetting controller successful. 
00:29:18.440 passed 00:29:18.440 Test: blockdev write read size > 128k ...passed 00:29:18.440 Test: blockdev write read invalid size ...passed 00:29:18.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.440 Test: blockdev write read max offset ...passed 00:29:18.440 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.440 Test: blockdev writev readv 8 blocks ...passed 00:29:18.440 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.440 Test: blockdev writev readv block ...passed 00:29:18.440 Test: blockdev writev readv size > 128k ...passed 00:29:18.440 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.440 Test: blockdev comparev and writev ...[2024-10-15 01:58:27.388966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be806000 len:0x1000 00:29:18.440 [2024-10-15 01:58:27.389033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.440 passed 00:29:18.440 Test: blockdev nvme passthru rw ...passed 00:29:18.440 Test: blockdev nvme passthru vendor specific ...[2024-10-15 01:58:27.389792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.440 passed 00:29:18.440 Test: blockdev nvme admin passthru ...[2024-10-15 01:58:27.389835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.440 passed 00:29:18.440 Test: blockdev copy ...passed 00:29:18.440 Suite: bdevio tests on: Nvme2n3 00:29:18.440 Test: blockdev write read block ...passed 00:29:18.440 Test: blockdev write zeroes read block ...passed 00:29:18.440 Test: blockdev write zeroes read no split ...passed 00:29:18.440 Test: blockdev write zeroes read split ...passed 00:29:18.699 Test: blockdev write zeroes read split partial ...passed 00:29:18.699 Test: blockdev reset ...[2024-10-15 01:58:27.454985] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:29:18.699 [2024-10-15 01:58:27.459189] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0] Resetting controller successful. 
00:29:18.699 passed 00:29:18.699 Test: blockdev write read 8 blocks ...passed 00:29:18.699 Test: blockdev write read size > 128k ...passed 00:29:18.699 Test: blockdev write read invalid size ...passed 00:29:18.699 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.699 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.699 Test: blockdev write read max offset ...passed 00:29:18.699 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.699 Test: blockdev writev readv 8 blocks ...passed 00:29:18.699 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.699 Test: blockdev writev readv block ...passed 00:29:18.699 Test: blockdev writev readv size > 128k ...passed 00:29:18.699 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.699 Test: blockdev comparev and writev ...[2024-10-15 01:58:27.468034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d683c000 len:0x1000 00:29:18.699 [2024-10-15 01:58:27.468095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.699 passed 00:29:18.699 Test: blockdev nvme passthru rw ...passed 00:29:18.699 Test: blockdev nvme passthru vendor specific ...[2024-10-15 01:58:27.468944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:29:18.699 Test: blockdev nvme admin passthru ... 00:29:18.699 [2024-10-15 01:58:27.469127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.699 passed 00:29:18.699 Test: blockdev copy ...passed 00:29:18.699 Suite: bdevio tests on: Nvme2n2 00:29:18.699 Test: blockdev write read block ...passed 00:29:18.699 Test: blockdev write zeroes read block ...passed 00:29:18.699 Test: blockdev write zeroes read no split ...passed 00:29:18.699 Test: blockdev write zeroes read split ...passed 00:29:18.700 Test: blockdev write zeroes read split partial ...passed 00:29:18.700 Test: blockdev reset ...[2024-10-15 01:58:27.533399] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:29:18.700 passed 00:29:18.700 Test: blockdev write read 8 blocks ...[2024-10-15 01:58:27.537691] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0] Resetting controller successful. 
00:29:18.700 passed 00:29:18.700 Test: blockdev write read size > 128k ...passed 00:29:18.700 Test: blockdev write read invalid size ...passed 00:29:18.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.700 Test: blockdev write read max offset ...passed 00:29:18.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.700 Test: blockdev writev readv 8 blocks ...passed 00:29:18.700 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.700 Test: blockdev writev readv block ...passed 00:29:18.700 Test: blockdev writev readv size > 128k ...passed 00:29:18.700 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.700 Test: blockdev comparev and writev ...[2024-10-15 01:58:27.545432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6836000 len:0x1000 00:29:18.700 [2024-10-15 01:58:27.545491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.700 passed 00:29:18.700 Test: blockdev nvme passthru rw ...passed 00:29:18.700 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.700 Test: blockdev nvme admin passthru ...[2024-10-15 01:58:27.546292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.700 [2024-10-15 01:58:27.546340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.700 passed 00:29:18.700 Test: blockdev copy ...passed 00:29:18.700 Suite: bdevio tests on: Nvme2n1 00:29:18.700 Test: blockdev write read block ...passed 00:29:18.700 Test: blockdev write zeroes read block ...passed 00:29:18.700 Test: blockdev write zeroes read no split ...passed 00:29:18.700 Test: blockdev write zeroes read split ...passed 00:29:18.700 Test: blockdev write zeroes read split partial ...passed 00:29:18.700 Test: blockdev reset ...[2024-10-15 01:58:27.610524] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:29:18.700 [2024-10-15 01:58:27.614845] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0] Resetting controller successful. passed 00:29:18.700 Test: blockdev write read 8 blocks ... 
00:29:18.700 passed 00:29:18.700 Test: blockdev write read size > 128k ...passed 00:29:18.700 Test: blockdev write read invalid size ...passed 00:29:18.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.700 Test: blockdev write read max offset ...passed 00:29:18.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.700 Test: blockdev writev readv 8 blocks ...passed 00:29:18.700 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.700 Test: blockdev writev readv block ...passed 00:29:18.700 Test: blockdev writev readv size > 128k ...passed 00:29:18.700 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.700 Test: blockdev comparev and writev ...[2024-10-15 01:58:27.623475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6832000 len:0x1000 00:29:18.700 [2024-10-15 01:58:27.623540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.700 passed 00:29:18.700 Test: blockdev nvme passthru rw ...passed 00:29:18.700 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.700 Test: blockdev nvme admin passthru ...[2024-10-15 01:58:27.624243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.700 [2024-10-15 01:58:27.624293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.700 passed 00:29:18.700 Test: blockdev copy ...passed 00:29:18.700 Suite: bdevio tests on: Nvme1n1p2 00:29:18.700 Test: blockdev write read block ...passed 00:29:18.700 Test: blockdev write zeroes read block ...passed 00:29:18.700 Test: blockdev write zeroes read no split ...passed 00:29:18.700 Test: blockdev write zeroes read split ...passed 00:29:18.700 Test: blockdev write zeroes read split partial ...passed 00:29:18.700 Test: blockdev reset ...[2024-10-15 01:58:27.689998] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:29:18.700 [2024-10-15 01:58:27.693773] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0] Resetting controller successful. 
00:29:18.700 passed 00:29:18.700 Test: blockdev write read 8 blocks ...passed 00:29:18.700 Test: blockdev write read size > 128k ...passed 00:29:18.700 Test: blockdev write read invalid size ...passed 00:29:18.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.700 Test: blockdev write read max offset ...passed 00:29:18.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.700 Test: blockdev writev readv 8 blocks ...passed 00:29:18.700 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.700 Test: blockdev writev readv block ...passed 00:29:18.700 Test: blockdev writev readv size > 128k ...passed 00:29:18.700 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.700 Test: blockdev comparev and writev ...[2024-10-15 01:58:27.702278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d682e000 len:0x1000 00:29:18.700 [2024-10-15 01:58:27.702336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.700 passed 00:29:18.700 Test: blockdev nvme passthru rw ...passed 00:29:18.700 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.700 Test: blockdev nvme admin passthru ...passed 00:29:18.700 Test: blockdev copy ...passed 00:29:18.700 Suite: bdevio tests on: Nvme1n1p1 00:29:18.700 Test: blockdev write read block ...passed 00:29:18.700 Test: blockdev write zeroes read block ...passed 00:29:18.700 Test: blockdev write zeroes read no split ...passed 00:29:18.959 Test: blockdev write zeroes read split ...passed 00:29:18.959 Test: blockdev write zeroes read split partial ...passed 00:29:18.959 Test: blockdev reset ...[2024-10-15 01:58:27.757561] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:29:18.959 passed 00:29:18.959 Test: blockdev write read 8 blocks ...[2024-10-15 01:58:27.761303] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0] Resetting controller successful. 
00:29:18.959 passed 00:29:18.959 Test: blockdev write read size > 128k ...passed 00:29:18.959 Test: blockdev write read invalid size ...passed 00:29:18.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.959 Test: blockdev write read max offset ...passed 00:29:18.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.959 Test: blockdev writev readv 8 blocks ...passed 00:29:18.959 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.959 Test: blockdev writev readv block ...passed 00:29:18.959 Test: blockdev writev readv size > 128k ...passed 00:29:18.959 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.959 Test: blockdev comparev and writev ...[2024-10-15 01:58:27.769593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c4a0e000 len:0x1000 00:29:18.959 [2024-10-15 01:58:27.769648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.959 passed 00:29:18.959 Test: blockdev nvme passthru rw ...passed 00:29:18.959 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.959 Test: blockdev nvme admin passthru ...passed 00:29:18.959 Test: blockdev copy ...passed 00:29:18.959 Suite: bdevio tests on: Nvme0n1 00:29:18.959 Test: blockdev write read block ...passed 00:29:18.959 Test: blockdev write zeroes read block ...passed 00:29:18.959 Test: blockdev write zeroes read no split ...passed 00:29:18.959 Test: blockdev write zeroes read split ...passed 00:29:18.959 Test: blockdev write zeroes read split partial ...passed 00:29:18.959 Test: blockdev reset ...[2024-10-15 01:58:27.825285] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:29:18.959 [2024-10-15 01:58:27.829041] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0] Resetting controller successful. 00:29:18.959 passed 00:29:18.959 Test: blockdev write read 8 blocks ...passed 00:29:18.959 Test: blockdev write read size > 128k ...passed 00:29:18.959 Test: blockdev write read invalid size ...passed 00:29:18.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.959 Test: blockdev write read max offset ...passed 00:29:18.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.959 Test: blockdev writev readv 8 blocks ...passed 00:29:18.959 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.959 Test: blockdev writev readv block ...passed 00:29:18.959 Test: blockdev writev readv size > 128k ...passed 00:29:18.959 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.959 Test: blockdev comparev and writev ...passed 00:29:18.959 Test: blockdev nvme passthru rw ...[2024-10-15 01:58:27.837766] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:29:18.959 separate metadata which is not supported yet. 
00:29:18.959 passed 00:29:18.959 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.959 Test: blockdev nvme admin passthru ...[2024-10-15 01:58:27.838298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:18.959 [2024-10-15 01:58:27.838356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:18.959 passed 00:29:18.959 Test: blockdev copy ...passed 00:29:18.959 00:29:18.959 Run Summary: Type Total Ran Passed Failed Inactive 00:29:18.959 suites 7 7 n/a 0 0 00:29:18.959 tests 161 161 161 0 0 00:29:18.959 asserts 1025 1025 1025 0 n/a 00:29:18.959 00:29:18.959 Elapsed time = 1.415 seconds 00:29:18.959 [2024-10-15 01:58:27.848816] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20000080bda0 was disconnected and freed. delete nvme_qpair. 00:29:18.959 [2024-10-15 01:58:27.849967] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019905da0 was disconnected and freed. delete nvme_qpair. 00:29:18.959 0 00:29:18.959 [2024-10-15 01:58:27.851618] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019a0bda0 was disconnected and freed. delete nvme_qpair. 00:29:18.959 [2024-10-15 01:58:27.852786] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200000809da0 was disconnected and freed. delete nvme_qpair. 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63245 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 63245 ']' 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 63245 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63245 00:29:18.959 killing process with pid 63245 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63245' 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 63245 00:29:18.959 01:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 63245 00:29:20.334 01:58:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:20.334 00:29:20.334 real 0m3.064s 00:29:20.334 user 0m7.424s 00:29:20.334 sys 0m0.464s 00:29:20.334 01:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.334 ************************************ 00:29:20.334 END TEST bdev_bounds 00:29:20.334 ************************************ 00:29:20.334 01:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:20.334 01:58:29 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:20.334 01:58:29 blockdev_nvme_gpt -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:20.334 01:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.334 01:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:20.334 ************************************ 00:29:20.334 START TEST bdev_nbd 00:29:20.334 ************************************ 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63310 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63310 /var/tmp/spdk-nbd.sock 00:29:20.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 63310 ']' 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.334 01:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:20.334 [2024-10-15 01:58:29.166242] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:29:20.334 [2024-10-15 01:58:29.166447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.593 [2024-10-15 01:58:29.347925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.852 [2024-10-15 01:58:29.617218] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.112 [2024-10-15 01:58:30.061774] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:29:21.371 [2024-10-15 01:58:30.130130] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:29:21.371 [2024-10-15 01:58:30.203632] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001993a8a0 was disconnected and freed. delete nvme_qpair. 00:29:21.371 [2024-10-15 01:58:30.269968] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200003e3b920 was disconnected and freed. delete nvme_qpair. 
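Note on the bdev_nbd test that starts here: the harness launches the bdev_svc app with a private RPC socket (-r /var/tmp/spdk-nbd.sock) and then exports each bdev as a kernel NBD block device over that socket. A rough sketch of one start/use/stop cycle, assuming the nbd kernel module is loaded (the harness only runs this test when /sys/module/nbd exists) and the paths from this run:
  cd /home/vagrant/spdk_repo/spdk
  # map a bdev to a free /dev/nbdX node via the test's RPC socket
  sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  sudo dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # any block I/O works here
  sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0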
00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:21.371 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:21.706 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:21.706 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:21.706 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:21.707 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:21.707 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:21.707 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:21.707 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:21.707 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:21.984 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:21.984 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:21.984 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:21.984 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.984 1+0 records in 00:29:21.984 1+0 records out 00:29:21.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004842 s, 8.5 MB/s 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:21.985 
01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:21.985 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:29:22.243 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:22.243 01:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.243 1+0 records in 00:29:22.243 1+0 records out 00:29:22.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582535 s, 7.0 MB/s 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:22.243 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:22.502 
01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.502 1+0 records in 00:29:22.502 1+0 records out 00:29:22.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571811 s, 7.2 MB/s 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:22.502 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:22.760 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.761 1+0 records in 00:29:22.761 1+0 records out 00:29:22.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569472 s, 7.2 MB/s 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:22.761 01:58:31 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:22.761 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.019 1+0 records in 00:29:23.019 1+0 records out 00:29:23.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799645 s, 5.1 MB/s 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:23.019 01:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:23.020 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:23.020 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:23.020 01:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:23.278 01:58:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.278 1+0 records in 00:29:23.278 1+0 records out 00:29:23.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620381 s, 6.6 MB/s 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:23.278 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.844 1+0 records in 00:29:23.844 1+0 records out 00:29:23.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785214 s, 5.2 MB/s 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:23.844 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:24.102 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd0", 00:29:24.102 "bdev_name": "Nvme0n1" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd1", 00:29:24.102 "bdev_name": "Nvme1n1p1" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd2", 00:29:24.102 "bdev_name": "Nvme1n1p2" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd3", 00:29:24.102 "bdev_name": "Nvme2n1" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd4", 00:29:24.102 "bdev_name": "Nvme2n2" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd5", 00:29:24.102 "bdev_name": "Nvme2n3" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd6", 00:29:24.102 "bdev_name": "Nvme3n1" 00:29:24.102 } 00:29:24.102 ]' 00:29:24.102 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:24.102 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:24.102 01:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd0", 00:29:24.102 "bdev_name": "Nvme0n1" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd1", 00:29:24.102 "bdev_name": "Nvme1n1p1" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd2", 00:29:24.102 "bdev_name": "Nvme1n1p2" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd3", 00:29:24.102 "bdev_name": "Nvme2n1" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd4", 00:29:24.102 "bdev_name": "Nvme2n2" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd5", 00:29:24.102 "bdev_name": "Nvme2n3" 00:29:24.102 }, 00:29:24.102 { 00:29:24.102 "nbd_device": "/dev/nbd6", 00:29:24.102 "bdev_name": "Nvme3n1" 00:29:24.102 } 00:29:24.102 ]' 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.102 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd0 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:24.360 [2024-10-15 01:58:33.269755] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200003e3dda0 was disconnected and freed. delete nvme_qpair. 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.360 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.619 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:24.877 [2024-10-15 01:58:33.795967] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d4ca0 was disconnected and freed. delete nvme_qpair. 
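Note on the waitfornbd traces above: each nbd_start_disk is followed by the waitfornbd helper, which polls /proc/partitions until the new node appears and then proves the device services I/O with a single 4 KiB direct read (the "1+0 records in/out" lines). A condensed sketch of that pattern, reconstructed from the trace; the retry cap of 20 matches the log, while the per-iteration delay is an assumption since the trace does not show it, and the read goes to /dev/null here instead of the harness's nbdtest file:
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # the device is usable once the kernel lists it in /proc/partitions
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed poll interval, not visible in the trace
      done
      # one direct-I/O read confirms the NBD device actually answers requests
      dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
  }
  waitfornbd nbd0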
00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.877 01:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:25.136 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.406 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.689 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:25.947 [2024-10-15 01:58:34.806634] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019fffda0 was disconnected and freed. delete nvme_qpair. 
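nbd_get_disks is used the same way before and after every start/stop cycle in this log: dump the current mappings as JSON and pull the device paths out with jq. A self-contained sketch of that bookkeeping step, with paths taken from the trace:

    rpc_sock=/var/tmp/spdk-nbd.sock
    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
    mapfile -t devices < <(jq -r '.[] | .nbd_device' <<<"$json")
    # grep -c exits non-zero when it counts zero matches, hence the guard;
    # the harness leans on the same behavior via its 'true' fallback.
    count=$(jq -r '.[] | .nbd_device' <<<"$json" | grep -c /dev/nbd || true)
    echo "mapped: $count (${devices[*]})"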
00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.947 01:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:29:26.205 [2024-10-15 01:58:35.134359] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019f3eda0 was disconnected and freed. delete nvme_qpair. 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.205 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:26.464 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:26.464 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:26.464 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 
Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:26.722 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:26.979 /dev/nbd0 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:26.979 1+0 records in 00:29:26.979 1+0 records out 00:29:26.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707927 s, 5.8 MB/s 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:26.979 01:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:29:27.237 /dev/nbd1 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:27.237 1+0 records in 00:29:27.237 1+0 records out 00:29:27.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642853 s, 6.4 MB/s 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:27.237 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:29:27.804 /dev/nbd10 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 
-- # local i 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:27.804 1+0 records in 00:29:27.804 1+0 records out 00:29:27.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00388945 s, 1.1 MB/s 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:27.804 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:29:28.062 /dev/nbd11 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.062 1+0 records in 00:29:28.062 1+0 records out 00:29:28.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006968 s, 5.9 MB/s 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:28.062 01:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:29:28.321 /dev/nbd12 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.321 1+0 records in 00:29:28.321 1+0 records out 00:29:28.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071225 s, 5.8 MB/s 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:28.321 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:29:28.580 /dev/nbd13 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.580 1+0 records in 00:29:28.580 1+0 records out 00:29:28.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722642 s, 5.7 MB/s 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:28.580 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:29:28.838 /dev/nbd14 00:29:29.096 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:29:29.096 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:29:29.096 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:29:29.096 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.097 1+0 records in 00:29:29.097 1+0 records out 00:29:29.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000901335 s, 4.5 MB/s 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:29.097 
01:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:29.097 01:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd0", 00:29:29.356 "bdev_name": "Nvme0n1" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd1", 00:29:29.356 "bdev_name": "Nvme1n1p1" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd10", 00:29:29.356 "bdev_name": "Nvme1n1p2" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd11", 00:29:29.356 "bdev_name": "Nvme2n1" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd12", 00:29:29.356 "bdev_name": "Nvme2n2" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd13", 00:29:29.356 "bdev_name": "Nvme2n3" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd14", 00:29:29.356 "bdev_name": "Nvme3n1" 00:29:29.356 } 00:29:29.356 ]' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd0", 00:29:29.356 "bdev_name": "Nvme0n1" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd1", 00:29:29.356 "bdev_name": "Nvme1n1p1" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd10", 00:29:29.356 "bdev_name": "Nvme1n1p2" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd11", 00:29:29.356 "bdev_name": "Nvme2n1" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd12", 00:29:29.356 "bdev_name": "Nvme2n2" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd13", 00:29:29.356 "bdev_name": "Nvme2n3" 00:29:29.356 }, 00:29:29.356 { 00:29:29.356 "nbd_device": "/dev/nbd14", 00:29:29.356 "bdev_name": "Nvme3n1" 00:29:29.356 } 00:29:29.356 ]' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:29.356 /dev/nbd1 00:29:29.356 /dev/nbd10 00:29:29.356 /dev/nbd11 00:29:29.356 /dev/nbd12 00:29:29.356 /dev/nbd13 00:29:29.356 /dev/nbd14' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:29.356 /dev/nbd1 00:29:29.356 /dev/nbd10 00:29:29.356 /dev/nbd11 00:29:29.356 /dev/nbd12 00:29:29.356 /dev/nbd13 00:29:29.356 /dev/nbd14' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 
/dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:29.356 256+0 records in 00:29:29.356 256+0 records out 00:29:29.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841297 s, 125 MB/s 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:29.356 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:29.615 256+0 records in 00:29:29.615 256+0 records out 00:29:29.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172772 s, 6.1 MB/s 00:29:29.615 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:29.615 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:29.615 256+0 records in 00:29:29.615 256+0 records out 00:29:29.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160328 s, 6.5 MB/s 00:29:29.615 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:29.615 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:29.874 256+0 records in 00:29:29.874 256+0 records out 00:29:29.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168686 s, 6.2 MB/s 00:29:29.874 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:29.874 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:29.874 256+0 records in 00:29:29.874 256+0 records out 00:29:29.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165782 s, 6.3 MB/s 00:29:29.874 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:29.874 01:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:29:30.132 256+0 records in 00:29:30.132 256+0 records out 00:29:30.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160321 s, 6.5 MB/s 00:29:30.132 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:30.132 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:29:30.391 256+0 records in 00:29:30.391 256+0 records out 00:29:30.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164106 s, 6.4 MB/s 00:29:30.391 01:58:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:29:30.391 256+0 records in 00:29:30.391 256+0 records out 00:29:30.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146384 s, 7.2 MB/s 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.391 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.649 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:30.908 [2024-10-15 01:58:39.734423] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200003e3dda0 was disconnected and freed. delete nvme_qpair. 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.908 01:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.166 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:31.424 [2024-10-15 01:58:40.285050] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d4ca0 was disconnected and freed. delete nvme_qpair. 
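Each nbd_start_disk earlier in the trace is followed by a waitfornbd probe: once the name shows up in /proc/partitions, a single 4 KiB O_DIRECT read must land in a non-empty scratch file. A sketch of that probe, assuming the same scratch path as the trace:

    probe_nbd() {
        local dev=$1 tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest size
        dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")       # the trace then checks '[' 4096 '!=' 0 ']'
        rm -f "$tmp"
        [[ $size -ne 0 ]]
    }
    probe_nbd /dev/nbd0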
00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.424 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.682 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.940 01:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:32.197 [2024-10-15 01:58:41.183750] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200019fffda0 was disconnected and freed. delete nvme_qpair. 
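The data-verification pass traced above (nbd_dd_data_verify) boils down to: fill a 1 MiB scratch file from /dev/urandom, dd it to every mapped device with O_DIRECT, then cmp each device byte-for-byte against the file. A sketch using the device list and block counts from this run:

    rand=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$rand" bs=4096 count=256          # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
        dd if="$rand" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$rand" "$dev"                          # fails loudly on any mismatch
    done
    rm "$rand"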
00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.197 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:29:32.763 [2024-10-15 01:58:41.502179] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019f3eda0 was disconnected and freed. delete nvme_qpair. 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:32.763 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:33.022 
01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:33.022 01:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:33.299 malloc_lvol_verify 00:29:33.299 01:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:33.557 c3a7534f-9923-4459-82e0-88e9ba569ce4 00:29:33.557 01:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:33.815 0938d8fb-43d9-42b0-967d-ca34acadf712 00:29:34.073 01:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:34.332 /dev/nbd0 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:34.332 mke2fs 1.47.0 (5-Feb-2023) 00:29:34.332 Discarding device blocks: 0/4096 done 00:29:34.332 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:34.332 00:29:34.332 Allocating group tables: 0/1 done 00:29:34.332 Writing inode tables: 0/1 done 00:29:34.332 Creating journal (1024 blocks): done 00:29:34.332 Writing superblocks and filesystem accounting information: 0/1 done 00:29:34.332 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.332 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:34.591 
01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63310 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 63310 ']' 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 63310 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63310 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:34.591 killing process with pid 63310 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63310' 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 63310 00:29:34.591 01:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 63310 00:29:35.967 01:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:35.967 00:29:35.967 real 0m15.822s 00:29:35.967 user 0m22.779s 00:29:35.967 sys 0m4.846s 00:29:35.967 01:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.967 01:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:35.967 ************************************ 00:29:35.967 END TEST bdev_nbd 00:29:35.967 ************************************ 00:29:35.967 01:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:29:35.967 01:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:29:35.967 01:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:29:35.967 skipping fio tests on NVMe due to multi-ns failures. 00:29:35.967 01:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:35.967 01:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:35.967 01:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:35.967 01:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:29:35.967 01:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.967 01:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:35.967 ************************************ 00:29:35.967 START TEST bdev_verify 00:29:35.967 ************************************ 00:29:35.967 01:58:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:36.226 [2024-10-15 01:58:45.036975] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
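The nbd_with_lvol_verify sequence just above proves the NBD path end-to-end on a logical volume: build a malloc bdev, carve an lvstore and a small lvol out of it, expose the lvol over NBD, confirm the kernel sees a non-zero capacity, and mkfs it. Every RPC name and argument below is taken from the trace; only the rpc() wrapper is mine:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) != 0 ))                # capacity visible (8192 sectors here)
    mkfs.ext4 /dev/nbd0                                   # end-to-end read/write proof
    rpc nbd_stop_disk /dev/nbd0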
00:29:36.226 [2024-10-15 01:58:45.037225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63776 ] 00:29:36.226 [2024-10-15 01:58:45.218234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:36.794 [2024-10-15 01:58:45.511689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.794 [2024-10-15 01:58:45.511697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.052 [2024-10-15 01:58:45.973571] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:29:37.052 [2024-10-15 01:58:46.043631] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:29:37.311 [2024-10-15 01:58:46.117506] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001993a8a0 was disconnected and freed. delete nvme_qpair. 00:29:37.311 [2024-10-15 01:58:46.184371] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200003a3c920 was disconnected and freed. delete nvme_qpair. 00:29:37.311 Running I/O for 5 seconds... 00:29:39.620 20224.00 IOPS, 79.00 MiB/s [2024-10-15T01:58:49.568Z] 18464.00 IOPS, 72.12 MiB/s [2024-10-15T01:58:50.943Z] 18026.67 IOPS, 70.42 MiB/s [2024-10-15T01:58:51.511Z] 17968.00 IOPS, 70.19 MiB/s [2024-10-15T01:58:51.511Z] [2024-10-15 01:58:51.395018] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001ac00260 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.401367] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d49a0 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.402387] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019debc20 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.403278] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001b1ffe20 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.404721] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001b13eda0 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.409297] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001b3ffe20 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.410670] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019a009a0 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.411507] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001b33eda0 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.412491] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200013800920 was disconnected and freed. delete nvme_qpair. 00:29:42.499 [2024-10-15 01:58:51.419390] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001b5ffe20 was disconnected and freed. delete nvme_qpair. 
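For reference, this bdev_verify case is a single bdevperf run; the command below is assembled verbatim from the run_test line in the trace (bdev.json describes the NVMe bdevs under test):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128: queue depth per job; -o 4096: 4 KiB I/Os; -w verify: read back
    # and check what was written; -t 5: five-second run; -m 0x3: cores 0 and 1,
    # matching the two reactors in the log. -C is passed as-is by the harness.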
00:29:42.499 [2024-10-15 01:58:51.420384] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200007000420 was disconnected and freed. delete nvme_qpair.
00:29:42.499 [2024-10-15 01:58:51.421984] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a200a20 was disconnected and freed. delete nvme_qpair.
00:29:42.499 [2024-10-15 01:58:51.423436] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a800320 was disconnected and freed. delete nvme_qpair.
00:29:42.499 [2024-10-15 01:58:51.424926] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001ac00520 was disconnected and freed. delete nvme_qpair.
00:29:42.499 17984.00 IOPS, 70.25 MiB/s
00:29:42.499
00:29:42.499 Latency(us)
00:29:42.499 [2024-10-15T01:58:51.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.499 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.499 Verification LBA range: start 0x0 length 0xbd0bd
00:29:42.499 Nvme0n1 : 5.10 1243.18 4.86 0.00 0.00 102347.35 12690.15 91035.46
00:29:42.499 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.499 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:29:42.499 Nvme0n1 : 5.10 1268.21 4.95 0.00 0.00 100336.49 16086.11 88175.71
00:29:42.499 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.499 Verification LBA range: start 0x0 length 0x4ff80
00:29:42.500 Nvme1n1p1 : 5.10 1242.15 4.85 0.00 0.00 102186.17 14656.23 82932.83
00:29:42.500 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x4ff80 length 0x4ff80
00:29:42.500 Nvme1n1p1 : 5.10 1267.47 4.95 0.00 0.00 100198.19 17039.36 85792.58
00:29:42.500 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x0 length 0x4ff7f
00:29:42.500 Nvme1n1p2 : 5.10 1241.65 4.85 0.00 0.00 101979.39 14120.03 81026.33
00:29:42.500 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:29:42.500 Nvme1n1p2 : 5.10 1266.83 4.95 0.00 0.00 100035.92 16681.89 83886.08
00:29:42.500 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x0 length 0x80000
00:29:42.500 Nvme2n1 : 5.12 1250.11 4.88 0.00 0.00 101483.69 11558.17 79119.83
00:29:42.500 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x80000 length 0x80000
00:29:42.500 Nvme2n1 : 5.12 1274.06 4.98 0.00 0.00 99679.72 15490.33 81026.33
00:29:42.500 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x0 length 0x80000
00:29:42.500 Nvme2n2 : 5.12 1249.77 4.88 0.00 0.00 101277.36 11736.90 81502.95
00:29:42.500 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x80000 length 0x80000
00:29:42.500 Nvme2n2 : 5.13 1273.16 4.97 0.00 0.00 99536.24 16562.73 84839.33
00:29:42.500 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x0 length 0x80000
00:29:42.500 Nvme2n3 : 5.12 1249.46 4.88 0.00 0.00 101077.91 10962.39 83886.08
00:29:42.500 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x80000 length 0x80000
00:29:42.500 Nvme2n3 : 5.13 1272.52 4.97 0.00 0.00 99381.41 16801.05 84839.33
00:29:42.500 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x0 length 0x20000
00:29:42.500 Nvme3n1 : 5.12 1249.07 4.88 0.00 0.00 100910.84 10009.13 85315.96
00:29:42.500 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.500 Verification LBA range: start 0x20000 length 0x20000
00:29:42.500 Nvme3n1 : 5.13 1272.03 4.97 0.00 0.00 99199.90 13881.72 88652.33
00:29:42.500 [2024-10-15T01:58:51.512Z] ===================================================================================================================
00:29:42.500 [2024-10-15T01:58:51.512Z] Total : 17619.67 68.83 0.00 0.00 100676.64 10009.13 91035.46
00:29:44.402
00:29:44.402 real 0m8.060s
00:29:44.402 user 0m14.488s
00:29:44.402 sys 0m0.380s
00:29:44.402 01:58:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:44.402 01:58:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:29:44.402 ************************************
00:29:44.402 END TEST bdev_verify
00:29:44.402 ************************************
00:29:44.402 01:58:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:44.402 01:58:53 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:29:44.402 01:58:53 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:44.402 01:58:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:29:44.402 ************************************
00:29:44.402 START TEST bdev_verify_big_io
00:29:44.402 ************************************
00:29:44.402 01:58:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:44.402 [2024-10-15 01:58:53.147454] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
00:29:44.402 [2024-10-15 01:58:53.147639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63880 ]
00:29:44.402 [2024-10-15 01:58:53.332951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:44.660 [2024-10-15 01:58:53.647748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:29:44.660 [2024-10-15 01:58:53.647754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:29:45.227 [2024-10-15 01:58:54.100530] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair.
00:29:45.227 [2024-10-15 01:58:54.171346] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair.
00:29:45.486 [2024-10-15 01:58:54.245384] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001993a8a0 was disconnected and freed. delete nvme_qpair.
00:29:45.486 [2024-10-15 01:58:54.313540] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200003a3c920 was disconnected and freed. delete nvme_qpair. 00:29:45.745 Running I/O for 5 seconds... 00:29:50.224 976.00 IOPS, 61.00 MiB/s [2024-10-15T01:59:00.611Z] 1495.50 IOPS, 93.47 MiB/s [2024-10-15T01:59:00.611Z] [2024-10-15 01:59:00.463838] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019600260 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.465330] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a1d72a0 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.500773] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a1cf2a0 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.504601] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001b40c2a0 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.508573] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a2072a0 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.511184] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a1d4260 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.517113] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200019deb2a0 was disconnected and freed. delete nvme_qpair. 00:29:51.599 2374.67 IOPS, 148.42 MiB/s [2024-10-15T01:59:00.611Z] [2024-10-15 01:59:00.565980] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019de31a0 was disconnected and freed. delete nvme_qpair. 00:29:51.599 [2024-10-15 01:59:00.590803] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019de81e0 was disconnected and freed. delete nvme_qpair. 00:29:51.858 [2024-10-15 01:59:00.615328] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019900160 was disconnected and freed. delete nvme_qpair. 00:29:51.858 [2024-10-15 01:59:00.624907] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200003aff160 was disconnected and freed. delete nvme_qpair. 00:29:51.858 [2024-10-15 01:59:00.630931] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200021fffe20 was disconnected and freed. delete nvme_qpair. 00:29:51.858 [2024-10-15 01:59:00.652076] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x200021f3eda0 was disconnected and freed. delete nvme_qpair. 
00:29:51.858 00:29:51.858 Latency(us) 00:29:51.858 [2024-10-15T01:59:00.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0xbd0b 00:29:51.858 Nvme0n1 : 5.94 107.68 6.73 0.00 0.00 1128147.87 12868.89 1243039.19 00:29:51.858 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0xbd0b length 0xbd0b 00:29:51.858 Nvme0n1 : 5.78 105.74 6.61 0.00 0.00 1157697.19 29669.93 1121023.07 00:29:51.858 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0x4ff8 00:29:51.858 Nvme1n1p1 : 5.84 99.90 6.24 0.00 0.00 1183423.40 93895.21 1799737.72 00:29:51.858 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x4ff8 length 0x4ff8 00:29:51.858 Nvme1n1p1 : 5.87 109.23 6.83 0.00 0.00 1101423.06 85792.58 1143901.09 00:29:51.858 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0x4ff7 00:29:51.858 Nvme1n1p2 : 5.95 111.12 6.95 0.00 0.00 1043960.81 107717.35 888429.85 00:29:51.858 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x4ff7 length 0x4ff7 00:29:51.858 Nvme1n1p2 : 5.78 107.71 6.73 0.00 0.00 1099037.09 86269.21 1502323.43 00:29:51.858 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0x8000 00:29:51.858 Nvme2n1 : 6.03 110.87 6.93 0.00 0.00 1020409.43 52190.49 1631965.56 00:29:51.858 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x8000 length 0x8000 00:29:51.858 Nvme2n1 : 5.79 109.39 6.84 0.00 0.00 1050825.85 87222.46 1250665.19 00:29:51.858 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0x8000 00:29:51.858 Nvme2n2 : 6.03 113.97 7.12 0.00 0.00 962300.20 19422.49 1914127.83 00:29:51.858 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x8000 length 0x8000 00:29:51.858 Nvme2n2 : 5.87 114.46 7.15 0.00 0.00 976033.89 79119.83 1288795.23 00:29:51.858 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0x8000 00:29:51.858 Nvme2n3 : 6.07 119.40 7.46 0.00 0.00 884598.68 17396.83 1937005.85 00:29:51.858 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x8000 length 0x8000 00:29:51.858 Nvme2n3 : 5.95 123.77 7.74 0.00 0.00 881662.87 31933.91 1304047.24 00:29:51.858 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x0 length 0x2000 00:29:51.858 Nvme3n1 : 6.13 144.11 9.01 0.00 0.00 720130.91 1102.20 1967509.88 00:29:51.858 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:51.858 Verification LBA range: start 0x2000 length 0x2000 00:29:51.858 Nvme3n1 : 5.99 138.17 8.64 0.00 0.00 769159.76 6642.97 1326925.27 00:29:51.858 [2024-10-15T01:59:00.870Z] =================================================================================================================== 
00:29:51.858 [2024-10-15T01:59:00.870Z] Total : 1615.53 100.97 0.00 0.00 982857.91 1102.20 1967509.88 00:29:52.116 [2024-10-15 01:59:01.034687] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x2000221ffe20 was disconnected and freed. delete nvme_qpair. 00:29:54.046 00:29:54.046 real 0m9.613s 00:29:54.047 user 0m17.497s 00:29:54.047 sys 0m0.419s 00:29:54.047 01:59:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:54.047 01:59:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:54.047 ************************************ 00:29:54.047 END TEST bdev_verify_big_io 00:29:54.047 ************************************ 00:29:54.047 01:59:02 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:54.047 01:59:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:29:54.047 01:59:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:54.047 01:59:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:54.047 ************************************ 00:29:54.047 START TEST bdev_write_zeroes 00:29:54.047 ************************************ 00:29:54.047 01:59:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:54.047 [2024-10-15 01:59:02.801628] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:29:54.047 [2024-10-15 01:59:02.801781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64005 ] 00:29:54.047 [2024-10-15 01:59:02.969650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.305 [2024-10-15 01:59:03.223516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.872 [2024-10-15 01:59:03.669807] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001a108da0 was disconnected and freed. delete nvme_qpair. 00:29:54.872 [2024-10-15 01:59:03.740084] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200019d1a720 was disconnected and freed. delete nvme_qpair. 00:29:54.872 [2024-10-15 01:59:03.815379] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001993a8a0 was disconnected and freed. delete nvme_qpair. 00:29:54.872 [2024-10-15 01:59:03.883122] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x200003e3b920 was disconnected and freed. delete nvme_qpair. 00:29:55.130 Running I/O for 1 seconds... 
00:29:56.064 49920.00 IOPS, 195.00 MiB/s 00:29:56.064 Latency(us) 00:29:56.064 [2024-10-15T01:59:05.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.064 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme0n1 : 1.04 6922.49 27.04 0.00 0.00 18435.69 7417.48 85315.96 00:29:56.064 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme1n1p1 : 1.04 7080.51 27.66 0.00 0.00 17990.33 11796.48 58386.62 00:29:56.064 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme1n1p2 : 1.04 7076.47 27.64 0.00 0.00 17939.59 12809.31 57195.05 00:29:56.064 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme2n1 : 1.04 7065.54 27.60 0.00 0.00 17896.27 11260.28 57433.37 00:29:56.064 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme2n2 : 1.04 7054.57 27.56 0.00 0.00 17895.59 11200.70 57433.37 00:29:56.064 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme2n3 : 1.04 7043.94 27.52 0.00 0.00 17890.05 11498.59 57433.37 00:29:56.064 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:56.064 Nvme3n1 : 1.05 6972.05 27.23 0.00 0.00 18026.10 12809.31 57671.68 00:29:56.064 [2024-10-15T01:59:05.076Z] =================================================================================================================== 00:29:56.064 [2024-10-15T01:59:05.076Z] Total : 49215.57 192.25 0.00 0.00 18008.97 7417.48 85315.96 00:29:56.064 [2024-10-15 01:59:05.017500] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019d214e0 was disconnected and freed. delete nvme_qpair. 00:29:56.064 [2024-10-15 01:59:05.019194] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20000053eda0 was disconnected and freed. delete nvme_qpair. 00:29:56.064 [2024-10-15 01:59:05.020805] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a3ffe20 was disconnected and freed. delete nvme_qpair. 00:29:56.064 [2024-10-15 01:59:05.022307] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001a33eda0 was disconnected and freed. delete nvme_qpair. 00:29:56.064 [2024-10-15 01:59:05.031301] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001a5ffe20 was disconnected and freed. delete nvme_qpair. 00:29:56.064 [2024-10-15 01:59:05.043394] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000002d49a0 was disconnected and freed. delete nvme_qpair. 00:29:56.064 [2024-10-15 01:59:05.044959] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x2000005ffbe0 was disconnected and freed. delete nvme_qpair. 
00:29:57.440 00:29:57.440 real 0m3.686s 00:29:57.440 user 0m3.237s 00:29:57.440 sys 0m0.325s 00:29:57.440 01:59:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.440 01:59:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:57.440 ************************************ 00:29:57.440 END TEST bdev_write_zeroes 00:29:57.440 ************************************ 00:29:57.440 01:59:06 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:57.440 01:59:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:29:57.440 01:59:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.440 01:59:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:57.440 ************************************ 00:29:57.440 START TEST bdev_json_nonenclosed 00:29:57.440 ************************************ 00:29:57.440 01:59:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:57.698 [2024-10-15 01:59:06.561723] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:29:57.698 [2024-10-15 01:59:06.561931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64063 ] 00:29:57.981 [2024-10-15 01:59:06.743902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.239 [2024-10-15 01:59:06.988434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.239 [2024-10-15 01:59:06.988568] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:58.239 [2024-10-15 01:59:06.988598] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:58.239 [2024-10-15 01:59:06.988614] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:58.497 00:29:58.497 real 0m0.986s 00:29:58.497 user 0m0.705s 00:29:58.497 sys 0m0.173s 00:29:58.497 01:59:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.497 01:59:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:58.497 ************************************ 00:29:58.497 END TEST bdev_json_nonenclosed 00:29:58.497 ************************************ 00:29:58.497 01:59:07 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:58.497 01:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:29:58.497 01:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.497 01:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:58.497 ************************************ 00:29:58.497 START TEST bdev_json_nonarray 00:29:58.497 ************************************ 00:29:58.497 01:59:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:58.757 [2024-10-15 01:59:07.610921] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:29:58.757 [2024-10-15 01:59:07.611141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64094 ] 00:29:59.018 [2024-10-15 01:59:07.785046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.285 [2024-10-15 01:59:08.038989] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.285 [2024-10-15 01:59:08.039176] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:59.285 [2024-10-15 01:59:08.039207] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:59.285 [2024-10-15 01:59:08.039222] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:59.544 00:29:59.544 real 0m1.005s 00:29:59.544 user 0m0.729s 00:29:59.544 sys 0m0.168s 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.544 ************************************ 00:29:59.544 END TEST bdev_json_nonarray 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:59.544 ************************************ 00:29:59.544 01:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:29:59.544 01:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:29:59.544 01:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:29:59.544 01:59:08 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:59.544 01:59:08 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.544 01:59:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:59.544 ************************************ 00:29:59.544 START TEST bdev_gpt_uuid 00:29:59.544 ************************************ 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64131 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64131 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:59.544 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 64131 ']' 00:29:59.545 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.545 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.545 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.545 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.545 01:59:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:29:59.803 [2024-10-15 01:59:08.681958] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:29:59.803 [2024-10-15 01:59:08.682177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64131 ] 00:30:00.062 [2024-10-15 01:59:08.862048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.321 [2024-10-15 01:59:09.162497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.255 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.255 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:30:01.255 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:01.255 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.255 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:01.255 [2024-10-15 01:59:10.206492] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035017da0 was disconnected and freed. delete nvme_qpair. 00:30:01.514 [2024-10-15 01:59:10.275589] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035000720 was disconnected and freed. delete nvme_qpair. 00:30:01.514 [2024-10-15 01:59:10.348334] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:12.0] qpair 0x20001bc097a0 was disconnected and freed. delete nvme_qpair. 00:30:01.514 [2024-10-15 01:59:10.415117] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:13.0] qpair 0x20001c72a920 was disconnected and freed. delete nvme_qpair. 00:30:01.514 Some configs were skipped because the RPC state that can call them passed over. 
00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:30:01.514 { 00:30:01.514 "name": "Nvme1n1p1", 00:30:01.514 "aliases": [ 00:30:01.514 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:01.514 ], 00:30:01.514 "product_name": "GPT Disk", 00:30:01.514 "block_size": 4096, 00:30:01.514 "num_blocks": 655104, 00:30:01.514 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:01.514 "assigned_rate_limits": { 00:30:01.514 "rw_ios_per_sec": 0, 00:30:01.514 "rw_mbytes_per_sec": 0, 00:30:01.514 "r_mbytes_per_sec": 0, 00:30:01.514 "w_mbytes_per_sec": 0 00:30:01.514 }, 00:30:01.514 "claimed": false, 00:30:01.514 "zoned": false, 00:30:01.514 "supported_io_types": { 00:30:01.514 "read": true, 00:30:01.514 "write": true, 00:30:01.514 "unmap": true, 00:30:01.514 "flush": true, 00:30:01.514 "reset": true, 00:30:01.514 "nvme_admin": false, 00:30:01.514 "nvme_io": false, 00:30:01.514 "nvme_io_md": false, 00:30:01.514 "write_zeroes": true, 00:30:01.514 "zcopy": false, 00:30:01.514 "get_zone_info": false, 00:30:01.514 "zone_management": false, 00:30:01.514 "zone_append": false, 00:30:01.514 "compare": true, 00:30:01.514 "compare_and_write": false, 00:30:01.514 "abort": true, 00:30:01.514 "seek_hole": false, 00:30:01.514 "seek_data": false, 00:30:01.514 "copy": true, 00:30:01.514 "nvme_iov_md": false 00:30:01.514 }, 00:30:01.514 "driver_specific": { 00:30:01.514 "gpt": { 00:30:01.514 "base_bdev": "Nvme1n1", 00:30:01.514 "offset_blocks": 256, 00:30:01.514 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:01.514 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:01.514 "partition_name": "SPDK_TEST_first" 00:30:01.514 } 00:30:01.514 } 00:30:01.514 } 00:30:01.514 ]' 00:30:01.514 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == 
\6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:30:01.773 { 00:30:01.773 "name": "Nvme1n1p2", 00:30:01.773 "aliases": [ 00:30:01.773 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:01.773 ], 00:30:01.773 "product_name": "GPT Disk", 00:30:01.773 "block_size": 4096, 00:30:01.773 "num_blocks": 655103, 00:30:01.773 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:01.773 "assigned_rate_limits": { 00:30:01.773 "rw_ios_per_sec": 0, 00:30:01.773 "rw_mbytes_per_sec": 0, 00:30:01.773 "r_mbytes_per_sec": 0, 00:30:01.773 "w_mbytes_per_sec": 0 00:30:01.773 }, 00:30:01.773 "claimed": false, 00:30:01.773 "zoned": false, 00:30:01.773 "supported_io_types": { 00:30:01.773 "read": true, 00:30:01.773 "write": true, 00:30:01.773 "unmap": true, 00:30:01.773 "flush": true, 00:30:01.773 "reset": true, 00:30:01.773 "nvme_admin": false, 00:30:01.773 "nvme_io": false, 00:30:01.773 "nvme_io_md": false, 00:30:01.773 "write_zeroes": true, 00:30:01.773 "zcopy": false, 00:30:01.773 "get_zone_info": false, 00:30:01.773 "zone_management": false, 00:30:01.773 "zone_append": false, 00:30:01.773 "compare": true, 00:30:01.773 "compare_and_write": false, 00:30:01.773 "abort": true, 00:30:01.773 "seek_hole": false, 00:30:01.773 "seek_data": false, 00:30:01.773 "copy": true, 00:30:01.773 "nvme_iov_md": false 00:30:01.773 }, 00:30:01.773 "driver_specific": { 00:30:01.773 "gpt": { 00:30:01.773 "base_bdev": "Nvme1n1", 00:30:01.773 "offset_blocks": 655360, 00:30:01.773 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:01.773 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:01.773 "partition_name": "SPDK_TEST_second" 00:30:01.773 } 00:30:01.773 } 00:30:01.773 } 00:30:01.773 ]' 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:01.773 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64131 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 64131 ']' 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 64131 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:30:02.032 
01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64131 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:02.032 killing process with pid 64131 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64131' 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 64131 00:30:02.032 01:59:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 64131 00:30:04.590 00:30:04.590 real 0m4.778s 00:30:04.590 user 0m4.968s 00:30:04.590 sys 0m0.635s 00:30:04.590 01:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:04.590 01:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:04.590 ************************************ 00:30:04.590 END TEST bdev_gpt_uuid 00:30:04.590 ************************************ 00:30:04.590 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:30:04.590 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:30:04.590 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:30:04.590 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:04.591 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:04.591 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:30:04.591 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:30:04.591 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:30:04.591 01:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:04.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:05.107 Waiting for block devices as requested 00:30:05.107 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:05.107 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:05.107 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:05.365 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.629 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:10.629 01:59:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:30:10.629 01:59:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:30:10.629 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:10.629 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:10.629 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:10.629 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:10.629 01:59:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:30:10.629 00:30:10.629 real 1m10.374s 00:30:10.629 user 1m29.406s 00:30:10.629 sys 0m10.962s 00:30:10.629 01:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:10.629 01:59:19 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:30:10.629 ************************************ 00:30:10.629 END TEST blockdev_nvme_gpt 00:30:10.629 ************************************ 00:30:10.629 01:59:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:10.629 01:59:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:10.629 01:59:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:10.629 01:59:19 -- common/autotest_common.sh@10 -- # set +x 00:30:10.629 ************************************ 00:30:10.629 START TEST nvme 00:30:10.629 ************************************ 00:30:10.629 01:59:19 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:10.887 * Looking for test storage... 00:30:10.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.887 01:59:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.887 01:59:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.887 01:59:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.887 01:59:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.887 01:59:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.887 01:59:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:30:10.887 01:59:19 nvme -- scripts/common.sh@345 -- # : 1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.887 01:59:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.887 01:59:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@353 -- # local d=1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.887 01:59:19 nvme -- scripts/common.sh@355 -- # echo 1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.887 01:59:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@353 -- # local d=2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.887 01:59:19 nvme -- scripts/common.sh@355 -- # echo 2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.887 01:59:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.887 01:59:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.887 01:59:19 nvme -- scripts/common.sh@368 -- # return 0 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.887 --rc genhtml_branch_coverage=1 00:30:10.887 --rc genhtml_function_coverage=1 00:30:10.887 --rc genhtml_legend=1 00:30:10.887 --rc geninfo_all_blocks=1 00:30:10.887 --rc geninfo_unexecuted_blocks=1 00:30:10.887 00:30:10.887 ' 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.887 --rc genhtml_branch_coverage=1 00:30:10.887 --rc genhtml_function_coverage=1 00:30:10.887 --rc genhtml_legend=1 00:30:10.887 --rc geninfo_all_blocks=1 00:30:10.887 --rc geninfo_unexecuted_blocks=1 00:30:10.887 00:30:10.887 ' 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.887 --rc genhtml_branch_coverage=1 00:30:10.887 --rc genhtml_function_coverage=1 00:30:10.887 --rc genhtml_legend=1 00:30:10.887 --rc geninfo_all_blocks=1 00:30:10.887 --rc geninfo_unexecuted_blocks=1 00:30:10.887 00:30:10.887 ' 00:30:10.887 01:59:19 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:10.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.887 --rc genhtml_branch_coverage=1 00:30:10.887 --rc genhtml_function_coverage=1 00:30:10.887 --rc genhtml_legend=1 00:30:10.887 --rc geninfo_all_blocks=1 00:30:10.887 --rc geninfo_unexecuted_blocks=1 00:30:10.887 00:30:10.887 ' 00:30:10.887 01:59:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:11.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:12.021 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:12.021 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:12.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:12.021 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:12.021 01:59:20 nvme -- nvme/nvme.sh@79 -- # uname 00:30:12.021 01:59:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:12.021 01:59:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:12.021 01:59:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:12.021 01:59:20 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1071 -- # stubpid=64785 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:12.021 Waiting for stub to ready for secondary processes... 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64785 ]] 00:30:12.021 01:59:20 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:30:12.279 [2024-10-15 01:59:21.045264] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:30:12.279 [2024-10-15 01:59:21.045468] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:30:13.216 01:59:21 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:13.216 01:59:21 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64785 ]] 00:30:13.216 01:59:21 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:30:13.474 [2024-10-15 01:59:22.339315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.732 [2024-10-15 01:59:22.593695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.732 [2024-10-15 01:59:22.593811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.732 [2024-10-15 01:59:22.593832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.732 [2024-10-15 01:59:22.613046] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:30:13.732 [2024-10-15 01:59:22.613094] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.732 [2024-10-15 01:59:22.626011] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:13.732 [2024-10-15 01:59:22.626155] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:13.732 [2024-10-15 01:59:22.629141] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.732 [2024-10-15 01:59:22.629483] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:30:13.732 [2024-10-15 01:59:22.629586] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:30:13.732 [2024-10-15 01:59:22.632545] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.732 [2024-10-15 01:59:22.632801] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:30:13.732 [2024-10-15 01:59:22.632903] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:30:13.732 [2024-10-15 01:59:22.636321] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.732 [2024-10-15 01:59:22.636595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:30:13.732 [2024-10-15 01:59:22.636666] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:30:13.732 [2024-10-15 01:59:22.636726] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:30:13.732 [2024-10-15 01:59:22.636775] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:30:13.991 01:59:22 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:13.991 done. 00:30:13.991 01:59:22 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:30:13.991 01:59:22 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:13.991 01:59:22 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:30:13.991 01:59:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.991 01:59:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:14.249 ************************************ 00:30:14.249 START TEST nvme_reset 00:30:14.249 ************************************ 00:30:14.249 01:59:23 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:14.508 Initializing NVMe Controllers 00:30:14.508 Skipping QEMU NVMe SSD at 0000:00:10.0 00:30:14.508 Skipping QEMU NVMe SSD at 0000:00:11.0 00:30:14.508 Skipping QEMU NVMe SSD at 0000:00:13.0 00:30:14.508 Skipping QEMU NVMe SSD at 0000:00:12.0 00:30:14.508 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:14.508 00:30:14.508 real 0m0.367s 00:30:14.508 user 0m0.141s 00:30:14.508 sys 0m0.164s 00:30:14.508 01:59:23 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.508 01:59:23 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:30:14.508 ************************************ 00:30:14.508 END TEST nvme_reset 00:30:14.508 ************************************ 00:30:14.508 01:59:23 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:14.508 01:59:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:14.508 01:59:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:14.508 01:59:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:14.508 ************************************ 00:30:14.508 START TEST nvme_identify 00:30:14.508 ************************************ 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:30:14.508 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:30:14.508 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:14.508 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:14.508 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:14.508 01:59:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:30:14.508 01:59:23 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:14.508 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:14.769 ===================================================== 00:30:14.769 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:14.769 ===================================================== 00:30:14.769 Controller Capabilities/Features 00:30:14.770 ================================ 00:30:14.770 Vendor ID: 1b36 00:30:14.770 Subsystem Vendor ID: 1af4 00:30:14.770 Serial Number: 12340 00:30:14.770 Model Number: QEMU NVMe Ctrl 00:30:14.770 Firmware Version: 8.0.0 00:30:14.770 Recommended Arb Burst: 6 00:30:14.770 IEEE OUI Identifier: 00 54 52 00:30:14.770 Multi-path I/O 00:30:14.770 May have multiple subsystem ports: No 00:30:14.770 May have multiple controllers: No 00:30:14.770 Associated with SR-IOV VF: No 00:30:14.770 Max Data Transfer Size: 524288 00:30:14.770 Max Number of Namespaces: 256 00:30:14.770 Max Number of I/O Queues: 64 00:30:14.770 NVMe Specification Version (VS): 1.4 00:30:14.770 NVMe Specification Version (Identify): 1.4 00:30:14.770 Maximum Queue Entries: 2048 00:30:14.770 Contiguous Queues Required: Yes 00:30:14.770 Arbitration Mechanisms Supported 00:30:14.770 Weighted Round Robin: Not Supported 00:30:14.770 Vendor Specific: Not Supported 00:30:14.770 Reset Timeout: 7500 ms 00:30:14.770 Doorbell Stride: 4 bytes 00:30:14.770 NVM Subsystem Reset: Not Supported 00:30:14.770 Command Sets Supported 00:30:14.770 NVM Command Set: Supported 00:30:14.770 Boot Partition: Not Supported 00:30:14.770 Memory Page Size Minimum: 4096 bytes 00:30:14.770 Memory Page Size Maximum: 65536 bytes 00:30:14.770 Persistent Memory Region: Not Supported 00:30:14.770 Optional Asynchronous Events Supported 00:30:14.770 Namespace Attribute Notices: Supported 00:30:14.770 Firmware Activation Notices: Not Supported 00:30:14.770 ANA Change Notices: Not Supported 00:30:14.770 PLE Aggregate Log Change Notices: Not Supported 00:30:14.770 LBA Status Info Alert Notices: Not Supported 00:30:14.770 EGE Aggregate Log Change Notices: Not Supported 00:30:14.770 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.770 Zone Descriptor Change Notices: Not Supported 00:30:14.770 Discovery Log Change Notices: Not Supported 00:30:14.770 Controller Attributes 00:30:14.770 128-bit Host Identifier: Not Supported 00:30:14.770 Non-Operational Permissive Mode: Not Supported 00:30:14.770 NVM Sets: Not Supported 00:30:14.770 Read Recovery Levels: Not Supported 00:30:14.770 Endurance Groups: Not Supported 00:30:14.770 Predictable Latency Mode: Not Supported 00:30:14.770 Traffic Based Keep ALive: Not Supported 00:30:14.770 Namespace Granularity: Not Supported 00:30:14.770 SQ Associations: Not Supported 00:30:14.770 UUID List: Not Supported 00:30:14.770 Multi-Domain Subsystem: Not Supported 00:30:14.770 Fixed Capacity Management: Not Supported 00:30:14.770 Variable Capacity Management: Not Supported 00:30:14.770 Delete Endurance Group: Not Supported 00:30:14.770 Delete NVM Set: Not Supported 00:30:14.770 Extended LBA Formats Supported: Supported 00:30:14.770 Flexible Data Placement Supported: Not Supported 00:30:14.770 00:30:14.770 Controller Memory Buffer Support 00:30:14.770 ================================ 00:30:14.770 Supported: No 00:30:14.770 00:30:14.770 Persistent Memory Region Support 00:30:14.770 ================================ 00:30:14.770 Supported: No 00:30:14.770 00:30:14.770 Admin 
Command Set Attributes 00:30:14.770 ============================ 00:30:14.770 Security Send/Receive: Not Supported 00:30:14.770 Format NVM: Supported 00:30:14.770 Firmware Activate/Download: Not Supported 00:30:14.770 Namespace Management: Supported 00:30:14.770 Device Self-Test: Not Supported 00:30:14.770 Directives: Supported 00:30:14.770 NVMe-MI: Not Supported 00:30:14.770 Virtualization Management: Not Supported 00:30:14.770 Doorbell Buffer Config: Supported 00:30:14.770 Get LBA Status Capability: Not Supported 00:30:14.770 Command & Feature Lockdown Capability: Not Supported 00:30:14.770 Abort Command Limit: 4 00:30:14.770 Async Event Request Limit: 4 00:30:14.770 Number of Firmware Slots: N/A 00:30:14.770 Firmware Slot 1 Read-Only: N/A 00:30:14.770 Firmware Activation Without Reset: N/A 00:30:14.770 Multiple Update Detection Support: N/A 00:30:14.770 Firmware Update Granularity: No Information Provided 00:30:14.770 Per-Namespace SMART Log: Yes 00:30:14.770 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.770 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:14.770 Command Effects Log Page: Supported 00:30:14.770 Get Log Page Extended Data: Supported 00:30:14.770 Telemetry Log Pages: Not Supported 00:30:14.770 Persistent Event Log Pages: Not Supported 00:30:14.770 Supported Log Pages Log Page: May Support 00:30:14.770 Commands Supported & Effects Log Page: Not Supported 00:30:14.770 Feature Identifiers & Effects Log Page:May Support 00:30:14.770 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.770 Data Area 4 for Telemetry Log: Not Supported 00:30:14.770 Error Log Page Entries Supported: 1 00:30:14.770 Keep Alive: Not Supported 00:30:14.770 00:30:14.770 NVM Command Set Attributes 00:30:14.770 ========================== 00:30:14.770 Submission Queue Entry Size 00:30:14.770 Max: 64 00:30:14.770 Min: 64 00:30:14.770 Completion Queue Entry Size 00:30:14.770 Max: 16 00:30:14.770 Min: 16 00:30:14.770 Number of Namespaces: 256 00:30:14.770 Compare Command: Supported 00:30:14.770 Write Uncorrectable Command: Not Supported 00:30:14.770 Dataset Management Command: Supported 00:30:14.770 Write Zeroes Command: Supported 00:30:14.770 Set Features Save Field: Supported 00:30:14.770 Reservations: Not Supported 00:30:14.770 Timestamp: Supported 00:30:14.770 Copy: Supported 00:30:14.770 Volatile Write Cache: Present 00:30:14.770 Atomic Write Unit (Normal): 1 00:30:14.770 Atomic Write Unit (PFail): 1 00:30:14.770 Atomic Compare & Write Unit: 1 00:30:14.770 Fused Compare & Write: Not Supported 00:30:14.770 Scatter-Gather List 00:30:14.770 SGL Command Set: Supported 00:30:14.770 SGL Keyed: Not Supported 00:30:14.770 SGL Bit Bucket Descriptor: Not Supported 00:30:14.770 SGL Metadata Pointer: Not Supported 00:30:14.770 Oversized SGL: Not Supported 00:30:14.770 SGL Metadata Address: Not Supported 00:30:14.770 SGL Offset: Not Supported 00:30:14.770 Transport SGL Data Block: Not Supported 00:30:14.770 Replay Protected Memory Block: Not Supported 00:30:14.770 00:30:14.770 Firmware Slot Information 00:30:14.770 ========================= 00:30:14.770 Active slot: 1 00:30:14.770 Slot 1 Firmware Revision: 1.0 00:30:14.770 00:30:14.770 00:30:14.770 Commands Supported and Effects 00:30:14.770 ============================== 00:30:14.770 Admin Commands 00:30:14.770 -------------- 00:30:14.770 Delete I/O Submission Queue (00h): Supported 00:30:14.770 Create I/O Submission Queue (01h): Supported 00:30:14.770 Get Log Page (02h): Supported 00:30:14.770 Delete I/O Completion Queue (04h): Supported 
00:30:14.770 Create I/O Completion Queue (05h): Supported 00:30:14.770 Identify (06h): Supported 00:30:14.770 Abort (08h): Supported 00:30:14.770 Set Features (09h): Supported 00:30:14.770 Get Features (0Ah): Supported 00:30:14.770 Asynchronous Event Request (0Ch): Supported 00:30:14.770 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.770 Directive Send (19h): Supported 00:30:14.770 Directive Receive (1Ah): Supported 00:30:14.770 Virtualization Management (1Ch): Supported 00:30:14.770 Doorbell Buffer Config (7Ch): Supported 00:30:14.770 Format NVM (80h): Supported LBA-Change 00:30:14.770 I/O Commands 00:30:14.770 ------------ 00:30:14.770 Flush (00h): Supported LBA-Change 00:30:14.770 Write (01h): Supported LBA-Change 00:30:14.770 Read (02h): Supported 00:30:14.770 Compare (05h): Supported 00:30:14.770 Write Zeroes (08h): Supported LBA-Change 00:30:14.770 Dataset Management (09h): Supported LBA-Change 00:30:14.770 Unknown (0Ch): Supported 00:30:14.770 Unknown (12h): Supported 00:30:14.770 Copy (19h): Supported LBA-Change 00:30:14.770 Unknown (1Dh): Supported LBA-Change 00:30:14.770 00:30:14.770 Error Log 00:30:14.770 ========= 00:30:14.770 00:30:14.770 Arbitration 00:30:14.770 =========== 00:30:14.770 Arbitration Burst: no limit 00:30:14.770 00:30:14.770 Power Management 00:30:14.770 ================ 00:30:14.770 Number of Power States: 1 00:30:14.770 Current Power State: Power State #0 00:30:14.770 Power State #0: 00:30:14.770 Max Power: 25.00 W 00:30:14.770 Non-Operational State: Operational 00:30:14.770 Entry Latency: 16 microseconds 00:30:14.770 Exit Latency: 4 microseconds 00:30:14.770 Relative Read Throughput: 0 00:30:14.770 Relative Read Latency: 0 00:30:14.770 Relative Write Throughput: 0 00:30:14.770 Relative Write Latency: 0 00:30:14.770 Idle Power[2024-10-15 01:59:23.753106] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 64818 terminated unexpected 00:30:14.770 [2024-10-15 01:59:23.754298] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 64818 terminated unexpected 00:30:14.770 : Not Reported 00:30:14.770 Active Power: Not Reported 00:30:14.770 Non-Operational Permissive Mode: Not Supported 00:30:14.770 00:30:14.770 Health Information 00:30:14.770 ================== 00:30:14.770 Critical Warnings: 00:30:14.770 Available Spare Space: OK 00:30:14.770 Temperature: OK 00:30:14.770 Device Reliability: OK 00:30:14.770 Read Only: No 00:30:14.770 Volatile Memory Backup: OK 00:30:14.770 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.771 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.771 Available Spare: 0% 00:30:14.771 Available Spare Threshold: 0% 00:30:14.771 Life Percentage Used: 0% 00:30:14.771 Data Units Read: 674 00:30:14.771 Data Units Written: 603 00:30:14.771 Host Read Commands: 32348 00:30:14.771 Host Write Commands: 32134 00:30:14.771 Controller Busy Time: 0 minutes 00:30:14.771 Power Cycles: 0 00:30:14.771 Power On Hours: 0 hours 00:30:14.771 Unsafe Shutdowns: 0 00:30:14.771 Unrecoverable Media Errors: 0 00:30:14.771 Lifetime Error Log Entries: 0 00:30:14.771 Warning Temperature Time: 0 minutes 00:30:14.771 Critical Temperature Time: 0 minutes 00:30:14.771 00:30:14.771 Number of Queues 00:30:14.771 ================ 00:30:14.771 Number of I/O Submission Queues: 64 00:30:14.771 Number of I/O Completion Queues: 64 00:30:14.771 00:30:14.771 ZNS Specific Controller Data 00:30:14.771 ============================ 00:30:14.771 Zone Append Size Limit: 0 00:30:14.771 00:30:14.771 
00:30:14.771 Active Namespaces 00:30:14.771 ================= 00:30:14.771 Namespace ID:1 00:30:14.771 Error Recovery Timeout: Unlimited 00:30:14.771 Command Set Identifier: NVM (00h) 00:30:14.771 Deallocate: Supported 00:30:14.771 Deallocated/Unwritten Error: Supported 00:30:14.771 Deallocated Read Value: All 0x00 00:30:14.771 Deallocate in Write Zeroes: Not Supported 00:30:14.771 Deallocated Guard Field: 0xFFFF 00:30:14.771 Flush: Supported 00:30:14.771 Reservation: Not Supported 00:30:14.771 Metadata Transferred as: Separate Metadata Buffer 00:30:14.771 Namespace Sharing Capabilities: Private 00:30:14.771 Size (in LBAs): 1548666 (5GiB) 00:30:14.771 Capacity (in LBAs): 1548666 (5GiB) 00:30:14.771 Utilization (in LBAs): 1548666 (5GiB) 00:30:14.771 Thin Provisioning: Not Supported 00:30:14.771 Per-NS Atomic Units: No 00:30:14.771 Maximum Single Source Range Length: 128 00:30:14.771 Maximum Copy Length: 128 00:30:14.771 Maximum Source Range Count: 128 00:30:14.771 NGUID/EUI64 Never Reused: No 00:30:14.771 Namespace Write Protected: No 00:30:14.771 Number of LBA Formats: 8 00:30:14.771 Current LBA Format: LBA Format #07 00:30:14.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.771 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.771 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.771 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.771 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.771 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.771 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.771 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.771 00:30:14.771 NVM Specific Namespace Data 00:30:14.771 =========================== 00:30:14.771 Logical Block Storage Tag Mask: 0 00:30:14.771 Protection Information Capabilities: 00:30:14.771 16b Guard Protection Information Storage Tag Support: No 00:30:14.771 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.771 Storage Tag Check Read Support: No 00:30:14.771 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.771 ===================================================== 00:30:14.771 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:14.771 ===================================================== 00:30:14.771 Controller Capabilities/Features 00:30:14.771 ================================ 00:30:14.771 Vendor ID: 1b36 00:30:14.771 Subsystem Vendor ID: 1af4 00:30:14.771 Serial Number: 12341 00:30:14.771 Model Number: QEMU NVMe Ctrl 00:30:14.771 Firmware Version: 8.0.0 00:30:14.771 Recommended Arb Burst: 6 00:30:14.771 IEEE OUI Identifier: 00 54 52 00:30:14.771 Multi-path I/O 00:30:14.771 May have multiple subsystem ports: No 00:30:14.771 May have multiple controllers: No 00:30:14.771 
Associated with SR-IOV VF: No 00:30:14.771 Max Data Transfer Size: 524288 00:30:14.771 Max Number of Namespaces: 256 00:30:14.771 Max Number of I/O Queues: 64 00:30:14.771 NVMe Specification Version (VS): 1.4 00:30:14.771 NVMe Specification Version (Identify): 1.4 00:30:14.771 Maximum Queue Entries: 2048 00:30:14.771 Contiguous Queues Required: Yes 00:30:14.771 Arbitration Mechanisms Supported 00:30:14.771 Weighted Round Robin: Not Supported 00:30:14.771 Vendor Specific: Not Supported 00:30:14.771 Reset Timeout: 7500 ms 00:30:14.771 Doorbell Stride: 4 bytes 00:30:14.771 NVM Subsystem Reset: Not Supported 00:30:14.771 Command Sets Supported 00:30:14.771 NVM Command Set: Supported 00:30:14.771 Boot Partition: Not Supported 00:30:14.771 Memory Page Size Minimum: 4096 bytes 00:30:14.771 Memory Page Size Maximum: 65536 bytes 00:30:14.771 Persistent Memory Region: Not Supported 00:30:14.771 Optional Asynchronous Events Supported 00:30:14.771 Namespace Attribute Notices: Supported 00:30:14.771 Firmware Activation Notices: Not Supported 00:30:14.771 ANA Change Notices: Not Supported 00:30:14.771 PLE Aggregate Log Change Notices: Not Supported 00:30:14.771 LBA Status Info Alert Notices: Not Supported 00:30:14.771 EGE Aggregate Log Change Notices: Not Supported 00:30:14.771 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.771 Zone Descriptor Change Notices: Not Supported 00:30:14.771 Discovery Log Change Notices: Not Supported 00:30:14.771 Controller Attributes 00:30:14.771 128-bit Host Identifier: Not Supported 00:30:14.771 Non-Operational Permissive Mode: Not Supported 00:30:14.771 NVM Sets: Not Supported 00:30:14.771 Read Recovery Levels: Not Supported 00:30:14.771 Endurance Groups: Not Supported 00:30:14.771 Predictable Latency Mode: Not Supported 00:30:14.771 Traffic Based Keep ALive: Not Supported 00:30:14.771 Namespace Granularity: Not Supported 00:30:14.771 SQ Associations: Not Supported 00:30:14.771 UUID List: Not Supported 00:30:14.771 Multi-Domain Subsystem: Not Supported 00:30:14.771 Fixed Capacity Management: Not Supported 00:30:14.771 Variable Capacity Management: Not Supported 00:30:14.771 Delete Endurance Group: Not Supported 00:30:14.771 Delete NVM Set: Not Supported 00:30:14.771 Extended LBA Formats Supported: Supported 00:30:14.771 Flexible Data Placement Supported: Not Supported 00:30:14.771 00:30:14.771 Controller Memory Buffer Support 00:30:14.771 ================================ 00:30:14.771 Supported: No 00:30:14.771 00:30:14.771 Persistent Memory Region Support 00:30:14.771 ================================ 00:30:14.771 Supported: No 00:30:14.771 00:30:14.771 Admin Command Set Attributes 00:30:14.771 ============================ 00:30:14.771 Security Send/Receive: Not Supported 00:30:14.771 Format NVM: Supported 00:30:14.771 Firmware Activate/Download: Not Supported 00:30:14.771 Namespace Management: Supported 00:30:14.771 Device Self-Test: Not Supported 00:30:14.771 Directives: Supported 00:30:14.771 NVMe-MI: Not Supported 00:30:14.771 Virtualization Management: Not Supported 00:30:14.771 Doorbell Buffer Config: Supported 00:30:14.771 Get LBA Status Capability: Not Supported 00:30:14.771 Command & Feature Lockdown Capability: Not Supported 00:30:14.771 Abort Command Limit: 4 00:30:14.771 Async Event Request Limit: 4 00:30:14.771 Number of Firmware Slots: N/A 00:30:14.771 Firmware Slot 1 Read-Only: N/A 00:30:14.771 Firmware Activation Without Reset: N/A 00:30:14.771 Multiple Update Detection Support: N/A 00:30:14.771 Firmware Update Granularity: No Information 
Provided 00:30:14.771 Per-Namespace SMART Log: Yes 00:30:14.771 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.771 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:14.771 Command Effects Log Page: Supported 00:30:14.771 Get Log Page Extended Data: Supported 00:30:14.771 Telemetry Log Pages: Not Supported 00:30:14.771 Persistent Event Log Pages: Not Supported 00:30:14.771 Supported Log Pages Log Page: May Support 00:30:14.771 Commands Supported & Effects Log Page: Not Supported 00:30:14.771 Feature Identifiers & Effects Log Page:May Support 00:30:14.771 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.771 Data Area 4 for Telemetry Log: Not Supported 00:30:14.771 Error Log Page Entries Supported: 1 00:30:14.771 Keep Alive: Not Supported 00:30:14.771 00:30:14.771 NVM Command Set Attributes 00:30:14.771 ========================== 00:30:14.771 Submission Queue Entry Size 00:30:14.771 Max: 64 00:30:14.771 Min: 64 00:30:14.771 Completion Queue Entry Size 00:30:14.771 Max: 16 00:30:14.771 Min: 16 00:30:14.771 Number of Namespaces: 256 00:30:14.771 Compare Command: Supported 00:30:14.771 Write Uncorrectable Command: Not Supported 00:30:14.771 Dataset Management Command: Supported 00:30:14.771 Write Zeroes Command: Supported 00:30:14.771 Set Features Save Field: Supported 00:30:14.771 Reservations: Not Supported 00:30:14.771 Timestamp: Supported 00:30:14.772 Copy: Supported 00:30:14.772 Volatile Write Cache: Present 00:30:14.772 Atomic Write Unit (Normal): 1 00:30:14.772 Atomic Write Unit (PFail): 1 00:30:14.772 Atomic Compare & Write Unit: 1 00:30:14.772 Fused Compare & Write: Not Supported 00:30:14.772 Scatter-Gather List 00:30:14.772 SGL Command Set: Supported 00:30:14.772 SGL Keyed: Not Supported 00:30:14.772 SGL Bit Bucket Descriptor: Not Supported 00:30:14.772 SGL Metadata Pointer: Not Supported 00:30:14.772 Oversized SGL: Not Supported 00:30:14.772 SGL Metadata Address: Not Supported 00:30:14.772 SGL Offset: Not Supported 00:30:14.772 Transport SGL Data Block: Not Supported 00:30:14.772 Replay Protected Memory Block: Not Supported 00:30:14.772 00:30:14.772 Firmware Slot Information 00:30:14.772 ========================= 00:30:14.772 Active slot: 1 00:30:14.772 Slot 1 Firmware Revision: 1.0 00:30:14.772 00:30:14.772 00:30:14.772 Commands Supported and Effects 00:30:14.772 ============================== 00:30:14.772 Admin Commands 00:30:14.772 -------------- 00:30:14.772 Delete I/O Submission Queue (00h): Supported 00:30:14.772 Create I/O Submission Queue (01h): Supported 00:30:14.772 Get Log Page (02h): Supported 00:30:14.772 Delete I/O Completion Queue (04h): Supported 00:30:14.772 Create I/O Completion Queue (05h): Supported 00:30:14.772 Identify (06h): Supported 00:30:14.772 Abort (08h): Supported 00:30:14.772 Set Features (09h): Supported 00:30:14.772 Get Features (0Ah): Supported 00:30:14.772 Asynchronous Event Request (0Ch): Supported 00:30:14.772 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.772 Directive Send (19h): Supported 00:30:14.772 Directive Receive (1Ah): Supported 00:30:14.772 Virtualization Management (1Ch): Supported 00:30:14.772 Doorbell Buffer Config (7Ch): Supported 00:30:14.772 Format NVM (80h): Supported LBA-Change 00:30:14.772 I/O Commands 00:30:14.772 ------------ 00:30:14.772 Flush (00h): Supported LBA-Change 00:30:14.772 Write (01h): Supported LBA-Change 00:30:14.772 Read (02h): Supported 00:30:14.772 Compare (05h): Supported 00:30:14.772 Write Zeroes (08h): Supported LBA-Change 00:30:14.772 Dataset Management (09h): 
Supported LBA-Change 00:30:14.772 Unknown (0Ch): Supported 00:30:14.772 Unknown (12h): Supported 00:30:14.772 Copy (19h): Supported LBA-Change 00:30:14.772 Unknown (1Dh): Supported LBA-Change 00:30:14.772 00:30:14.772 Error Log 00:30:14.772 ========= 00:30:14.772 00:30:14.772 Arbitration 00:30:14.772 =========== 00:30:14.772 Arbitration Burst: no limit 00:30:14.772 00:30:14.772 Power Management 00:30:14.772 ================ 00:30:14.772 Number of Power States: 1 00:30:14.772 Current Power State: Power State #0 00:30:14.772 Power State #0: 00:30:14.772 Max Power: 25.00 W 00:30:14.772 Non-Operational State: Operational 00:30:14.772 Entry Latency: 16 microseconds 00:30:14.772 Exit Latency: 4 microseconds 00:30:14.772 Relative Read Throughput: 0 00:30:14.772 Relative Read Latency: 0 00:30:14.772 Relative Write Throughput: 0 00:30:14.772 Relative Write Latency: 0 00:30:14.772 Idle Power: Not Reported 00:30:14.772 Active Power: Not Reported 00:30:14.772 Non-Operational Permissive Mode: Not Supported 00:30:14.772 00:30:14.772 Health Information 00:30:14.772 ================== 00:30:14.772 Critical Warnings: 00:30:14.772 Available Spare Space: OK 00:30:14.772 [2024-10-15 01:59:23.755424] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 64818 terminated unexpected 00:30:14.772 Temperature: OK 00:30:14.772 Device Reliability: OK 00:30:14.772 Read Only: No 00:30:14.772 Volatile Memory Backup: OK 00:30:14.772 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.772 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.772 Available Spare: 0% 00:30:14.772 Available Spare Threshold: 0% 00:30:14.772 Life Percentage Used: 0% 00:30:14.772 Data Units Read: 999 00:30:14.772 Data Units Written: 872 00:30:14.772 Host Read Commands: 47337 00:30:14.772 Host Write Commands: 46243 00:30:14.772 Controller Busy Time: 0 minutes 00:30:14.772 Power Cycles: 0 00:30:14.772 Power On Hours: 0 hours 00:30:14.772 Unsafe Shutdowns: 0 00:30:14.772 Unrecoverable Media Errors: 0 00:30:14.772 Lifetime Error Log Entries: 0 00:30:14.772 Warning Temperature Time: 0 minutes 00:30:14.772 Critical Temperature Time: 0 minutes 00:30:14.772 00:30:14.772 Number of Queues 00:30:14.772 ================ 00:30:14.772 Number of I/O Submission Queues: 64 00:30:14.772 Number of I/O Completion Queues: 64 00:30:14.772 00:30:14.772 ZNS Specific Controller Data 00:30:14.772 ============================ 00:30:14.772 Zone Append Size Limit: 0 00:30:14.772 00:30:14.772 00:30:14.772 Active Namespaces 00:30:14.772 ================= 00:30:14.772 Namespace ID:1 00:30:14.772 Error Recovery Timeout: Unlimited 00:30:14.772 Command Set Identifier: NVM (00h) 00:30:14.772 Deallocate: Supported 00:30:14.772 Deallocated/Unwritten Error: Supported 00:30:14.772 Deallocated Read Value: All 0x00 00:30:14.772 Deallocate in Write Zeroes: Not Supported 00:30:14.772 Deallocated Guard Field: 0xFFFF 00:30:14.772 Flush: Supported 00:30:14.772 Reservation: Not Supported 00:30:14.772 Namespace Sharing Capabilities: Private 00:30:14.772 Size (in LBAs): 1310720 (5GiB) 00:30:14.772 Capacity (in LBAs): 1310720 (5GiB) 00:30:14.772 Utilization (in LBAs): 1310720 (5GiB) 00:30:14.772 Thin Provisioning: Not Supported 00:30:14.772 Per-NS Atomic Units: No 00:30:14.772 Maximum Single Source Range Length: 128 00:30:14.772 Maximum Copy Length: 128 00:30:14.772 Maximum Source Range Count: 128 00:30:14.772 NGUID/EUI64 Never Reused: No 00:30:14.772 Namespace Write Protected: No 00:30:14.772 Number of LBA Formats: 8 00:30:14.772 Current LBA Format: LBA Format 
#04 00:30:14.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.772 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.772 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.772 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.772 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.772 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.772 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.772 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.772 00:30:14.772 NVM Specific Namespace Data 00:30:14.772 =========================== 00:30:14.772 Logical Block Storage Tag Mask: 0 00:30:14.772 Protection Information Capabilities: 00:30:14.772 16b Guard Protection Information Storage Tag Support: No 00:30:14.772 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.772 Storage Tag Check Read Support: No 00:30:14.772 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.772 ===================================================== 00:30:14.772 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:14.772 ===================================================== 00:30:14.772 Controller Capabilities/Features 00:30:14.772 ================================ 00:30:14.772 Vendor ID: 1b36 00:30:14.772 Subsystem Vendor ID: 1af4 00:30:14.772 Serial Number: 12343 00:30:14.772 Model Number: QEMU NVMe Ctrl 00:30:14.772 Firmware Version: 8.0.0 00:30:14.772 Recommended Arb Burst: 6 00:30:14.772 IEEE OUI Identifier: 00 54 52 00:30:14.772 Multi-path I/O 00:30:14.772 May have multiple subsystem ports: No 00:30:14.772 May have multiple controllers: Yes 00:30:14.772 Associated with SR-IOV VF: No 00:30:14.772 Max Data Transfer Size: 524288 00:30:14.772 Max Number of Namespaces: 256 00:30:14.772 Max Number of I/O Queues: 64 00:30:14.772 NVMe Specification Version (VS): 1.4 00:30:14.772 NVMe Specification Version (Identify): 1.4 00:30:14.772 Maximum Queue Entries: 2048 00:30:14.772 Contiguous Queues Required: Yes 00:30:14.772 Arbitration Mechanisms Supported 00:30:14.772 Weighted Round Robin: Not Supported 00:30:14.772 Vendor Specific: Not Supported 00:30:14.772 Reset Timeout: 7500 ms 00:30:14.772 Doorbell Stride: 4 bytes 00:30:14.772 NVM Subsystem Reset: Not Supported 00:30:14.772 Command Sets Supported 00:30:14.772 NVM Command Set: Supported 00:30:14.772 Boot Partition: Not Supported 00:30:14.772 Memory Page Size Minimum: 4096 bytes 00:30:14.772 Memory Page Size Maximum: 65536 bytes 00:30:14.772 Persistent Memory Region: Not Supported 00:30:14.772 Optional Asynchronous Events Supported 00:30:14.772 Namespace Attribute Notices: Supported 00:30:14.772 Firmware Activation Notices: Not Supported 00:30:14.772 ANA Change Notices: Not Supported 00:30:14.772 PLE Aggregate Log Change 
Notices: Not Supported 00:30:14.772 LBA Status Info Alert Notices: Not Supported 00:30:14.772 EGE Aggregate Log Change Notices: Not Supported 00:30:14.772 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.772 Zone Descriptor Change Notices: Not Supported 00:30:14.772 Discovery Log Change Notices: Not Supported 00:30:14.772 Controller Attributes 00:30:14.773 128-bit Host Identifier: Not Supported 00:30:14.773 Non-Operational Permissive Mode: Not Supported 00:30:14.773 NVM Sets: Not Supported 00:30:14.773 Read Recovery Levels: Not Supported 00:30:14.773 Endurance Groups: Supported 00:30:14.773 Predictable Latency Mode: Not Supported 00:30:14.773 Traffic Based Keep ALive: Not Supported 00:30:14.773 Namespace Granularity: Not Supported 00:30:14.773 SQ Associations: Not Supported 00:30:14.773 UUID List: Not Supported 00:30:14.773 Multi-Domain Subsystem: Not Supported 00:30:14.773 Fixed Capacity Management: Not Supported 00:30:14.773 Variable Capacity Management: Not Supported 00:30:14.773 Delete Endurance Group: Not Supported 00:30:14.773 Delete NVM Set: Not Supported 00:30:14.773 Extended LBA Formats Supported: Supported 00:30:14.773 Flexible Data Placement Supported: Supported 00:30:14.773 00:30:14.773 Controller Memory Buffer Support 00:30:14.773 ================================ 00:30:14.773 Supported: No 00:30:14.773 00:30:14.773 Persistent Memory Region Support 00:30:14.773 ================================ 00:30:14.773 Supported: No 00:30:14.773 00:30:14.773 Admin Command Set Attributes 00:30:14.773 ============================ 00:30:14.773 Security Send/Receive: Not Supported 00:30:14.773 Format NVM: Supported 00:30:14.773 Firmware Activate/Download: Not Supported 00:30:14.773 Namespace Management: Supported 00:30:14.773 Device Self-Test: Not Supported 00:30:14.773 Directives: Supported 00:30:14.773 NVMe-MI: Not Supported 00:30:14.773 Virtualization Management: Not Supported 00:30:14.773 Doorbell Buffer Config: Supported 00:30:14.773 Get LBA Status Capability: Not Supported 00:30:14.773 Command & Feature Lockdown Capability: Not Supported 00:30:14.773 Abort Command Limit: 4 00:30:14.773 Async Event Request Limit: 4 00:30:14.773 Number of Firmware Slots: N/A 00:30:14.773 Firmware Slot 1 Read-Only: N/A 00:30:14.773 Firmware Activation Without Reset: N/A 00:30:14.773 Multiple Update Detection Support: N/A 00:30:14.773 Firmware Update Granularity: No Information Provided 00:30:14.773 Per-Namespace SMART Log: Yes 00:30:14.773 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.773 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:14.773 Command Effects Log Page: Supported 00:30:14.773 Get Log Page Extended Data: Supported 00:30:14.773 Telemetry Log Pages: Not Supported 00:30:14.773 Persistent Event Log Pages: Not Supported 00:30:14.773 Supported Log Pages Log Page: May Support 00:30:14.773 Commands Supported & Effects Log Page: Not Supported 00:30:14.773 Feature Identifiers & Effects Log Page:May Support 00:30:14.773 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.773 Data Area 4 for Telemetry Log: Not Supported 00:30:14.773 Error Log Page Entries Supported: 1 00:30:14.773 Keep Alive: Not Supported 00:30:14.773 00:30:14.773 NVM Command Set Attributes 00:30:14.773 ========================== 00:30:14.773 Submission Queue Entry Size 00:30:14.773 Max: 64 00:30:14.773 Min: 64 00:30:14.773 Completion Queue Entry Size 00:30:14.773 Max: 16 00:30:14.773 Min: 16 00:30:14.773 Number of Namespaces: 256 00:30:14.773 Compare Command: Supported 00:30:14.773 Write 
Uncorrectable Command: Not Supported 00:30:14.773 Dataset Management Command: Supported 00:30:14.773 Write Zeroes Command: Supported 00:30:14.773 Set Features Save Field: Supported 00:30:14.773 Reservations: Not Supported 00:30:14.773 Timestamp: Supported 00:30:14.773 Copy: Supported 00:30:14.773 Volatile Write Cache: Present 00:30:14.773 Atomic Write Unit (Normal): 1 00:30:14.773 Atomic Write Unit (PFail): 1 00:30:14.773 Atomic Compare & Write Unit: 1 00:30:14.773 Fused Compare & Write: Not Supported 00:30:14.773 Scatter-Gather List 00:30:14.773 SGL Command Set: Supported 00:30:14.773 SGL Keyed: Not Supported 00:30:14.773 SGL Bit Bucket Descriptor: Not Supported 00:30:14.773 SGL Metadata Pointer: Not Supported 00:30:14.773 Oversized SGL: Not Supported 00:30:14.773 SGL Metadata Address: Not Supported 00:30:14.773 SGL Offset: Not Supported 00:30:14.773 Transport SGL Data Block: Not Supported 00:30:14.773 Replay Protected Memory Block: Not Supported 00:30:14.773 00:30:14.773 Firmware Slot Information 00:30:14.773 ========================= 00:30:14.773 Active slot: 1 00:30:14.773 Slot 1 Firmware Revision: 1.0 00:30:14.773 00:30:14.773 00:30:14.773 Commands Supported and Effects 00:30:14.773 ============================== 00:30:14.773 Admin Commands 00:30:14.773 -------------- 00:30:14.773 Delete I/O Submission Queue (00h): Supported 00:30:14.773 Create I/O Submission Queue (01h): Supported 00:30:14.773 Get Log Page (02h): Supported 00:30:14.773 Delete I/O Completion Queue (04h): Supported 00:30:14.773 Create I/O Completion Queue (05h): Supported 00:30:14.773 Identify (06h): Supported 00:30:14.773 Abort (08h): Supported 00:30:14.773 Set Features (09h): Supported 00:30:14.773 Get Features (0Ah): Supported 00:30:14.773 Asynchronous Event Request (0Ch): Supported 00:30:14.773 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.773 Directive Send (19h): Supported 00:30:14.773 Directive Receive (1Ah): Supported 00:30:14.773 Virtualization Management (1Ch): Supported 00:30:14.773 Doorbell Buffer Config (7Ch): Supported 00:30:14.773 Format NVM (80h): Supported LBA-Change 00:30:14.773 I/O Commands 00:30:14.773 ------------ 00:30:14.773 Flush (00h): Supported LBA-Change 00:30:14.773 Write (01h): Supported LBA-Change 00:30:14.773 Read (02h): Supported 00:30:14.773 Compare (05h): Supported 00:30:14.773 Write Zeroes (08h): Supported LBA-Change 00:30:14.773 Dataset Management (09h): Supported LBA-Change 00:30:14.773 Unknown (0Ch): Supported 00:30:14.773 Unknown (12h): Supported 00:30:14.773 Copy (19h): Supported LBA-Change 00:30:14.773 Unknown (1Dh): Supported LBA-Change 00:30:14.773 00:30:14.773 Error Log 00:30:14.773 ========= 00:30:14.773 00:30:14.773 Arbitration 00:30:14.773 =========== 00:30:14.773 Arbitration Burst: no limit 00:30:14.773 00:30:14.773 Power Management 00:30:14.773 ================ 00:30:14.773 Number of Power States: 1 00:30:14.773 Current Power State: Power State #0 00:30:14.773 Power State #0: 00:30:14.773 Max Power: 25.00 W 00:30:14.773 Non-Operational State: Operational 00:30:14.773 Entry Latency: 16 microseconds 00:30:14.773 Exit Latency: 4 microseconds 00:30:14.773 Relative Read Throughput: 0 00:30:14.773 Relative Read Latency: 0 00:30:14.773 Relative Write Throughput: 0 00:30:14.773 Relative Write Latency: 0 00:30:14.773 Idle Power: Not Reported 00:30:14.773 Active Power: Not Reported 00:30:14.773 Non-Operational Permissive Mode: Not Supported 00:30:14.773 00:30:14.773 Health Information 00:30:14.773 ================== 00:30:14.773 Critical Warnings: 00:30:14.773 
Available Spare Space: OK 00:30:14.773 Temperature: OK 00:30:14.773 Device Reliability: OK 00:30:14.773 Read Only: No 00:30:14.773 Volatile Memory Backup: OK 00:30:14.773 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.773 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.773 Available Spare: 0% 00:30:14.773 Available Spare Threshold: 0% 00:30:14.773 Life Percentage Used: 0% 00:30:14.773 Data Units Read: 774 00:30:14.773 Data Units Written: 703 00:30:14.773 Host Read Commands: 33425 00:30:14.773 Host Write Commands: 32848 00:30:14.773 Controller Busy Time: 0 minutes 00:30:14.773 Power Cycles: 0 00:30:14.773 Power On Hours: 0 hours 00:30:14.773 Unsafe Shutdowns: 0 00:30:14.773 Unrecoverable Media Errors: 0 00:30:14.773 Lifetime Error Log Entries: 0 00:30:14.773 Warning Temperature Time: 0 minutes 00:30:14.773 Critical Temperature Time: 0 minutes 00:30:14.773 00:30:14.773 Number of Queues 00:30:14.773 ================ 00:30:14.773 Number of I/O Submission Queues: 64 00:30:14.773 Number of I/O Completion Queues: 64 00:30:14.773 00:30:14.773 ZNS Specific Controller Data 00:30:14.773 ============================ 00:30:14.773 Zone Append Size Limit: 0 00:30:14.773 00:30:14.773 00:30:14.773 Active Namespaces 00:30:14.773 ================= 00:30:14.773 Namespace ID:1 00:30:14.773 Error Recovery Timeout: Unlimited 00:30:14.773 Command Set Identifier: NVM (00h) 00:30:14.773 Deallocate: Supported 00:30:14.773 Deallocated/Unwritten Error: Supported 00:30:14.773 Deallocated Read Value: All 0x00 00:30:14.773 Deallocate in Write Zeroes: Not Supported 00:30:14.773 Deallocated Guard Field: 0xFFFF 00:30:14.773 Flush: Supported 00:30:14.773 Reservation: Not Supported 00:30:14.773 Namespace Sharing Capabilities: Multiple Controllers 00:30:14.773 Size (in LBAs): 262144 (1GiB) 00:30:14.773 Capacity (in LBAs): 262144 (1GiB) 00:30:14.773 Utilization (in LBAs): 262144 (1GiB) 00:30:14.773 Thin Provisioning: Not Supported 00:30:14.773 Per-NS Atomic Units: No 00:30:14.774 Maximum Single Source Range Length: 128 00:30:14.774 Maximum Copy Length: 128 00:30:14.774 Maximum Source Range Count: 128 00:30:14.774 NGUID/EUI64 Never Reused: No 00:30:14.774 Namespace Write Protected: No 00:30:14.774 Endurance group ID: 1 00:30:14.774 Number of LBA Formats: 8 00:30:14.774 Current LBA Format: LBA Format #04 00:30:14.774 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.774 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.774 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.774 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.774 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.774 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.774 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.774 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.774 00:30:14.774 Get Feature FDP: 00:30:14.774 ================ 00:30:14.774 Enabled: Yes 00:30:14.774 FDP configuration index: 0 00:30:14.774 00:30:14.774 FDP configurations log page 00:30:14.774 =========================== 00:30:14.774 Number of FDP configurations: 1 00:30:14.774 Version: 0 00:30:14.774 Size: 112 00:30:14.774 FDP Configuration Descriptor: 0 00:30:14.774 Descriptor Size: 96 00:30:14.774 Reclaim Group Identifier format: 2 00:30:14.774 FDP Volatile Write Cache: Not Present 00:30:14.774 FDP Configuration: Valid 00:30:14.774 Vendor Specific Size: 0 00:30:14.774 Number of Reclaim Groups: 2 00:30:14.774 Number of Reclaim Unit Handles: 8 00:30:14.774 Max Placement Identifiers: 128 00:30:14.774 Number of 
Namespaces Supported: 256 00:30:14.774 Reclaim unit Nominal Size: 6000000 bytes 00:30:14.774 Estimated Reclaim Unit Time Limit: Not Reported 00:30:14.774 RUH Desc #000: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #001: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #002: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #003: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #004: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #005: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #006: RUH Type: Initially Isolated 00:30:14.774 RUH Desc #007: RUH Type: Initially Isolated 00:30:14.774 00:30:14.774 FDP reclaim unit handle usage log page 00:30:14.774 ====================================== 00:30:14.774 Number of Reclaim Unit Handles: 8 00:30:14.774 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:14.774 RUH Usage Desc #001: RUH Attributes: Unused 00:30:14.774 RUH Usage Desc #002: RUH Attributes: Unused 00:30:14.774 RUH Usage Desc #003: RUH Attributes: Unused 00:30:14.774 RUH Usage Desc #004: RUH Attributes: Unused 00:30:14.774 RUH Usage Desc #005: RUH Attributes: Unused 00:30:14.774 RUH Usage Desc #006: RUH Attributes: Unused 00:30:14.774 RUH Usage Desc #007: RUH Attributes: Unused 00:30:14.774 00:30:14.774 FDP statistics log page 00:30:14.774 ======================= 00:30:14.774 Host bytes with metadata written: 441294848 00:30:14.774 [2024-10-15 01:59:23.757018] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 64818 terminated unexpected 00:30:14.774 Media bytes with metadata written: 441360384 00:30:14.774 Media bytes erased: 0 00:30:14.774 00:30:14.774 FDP events log page 00:30:14.774 =================== 00:30:14.774 Number of FDP events: 0 00:30:14.774 00:30:14.774 NVM Specific Namespace Data 00:30:14.774 =========================== 00:30:14.774 Logical Block Storage Tag Mask: 0 00:30:14.774 Protection Information Capabilities: 00:30:14.774 16b Guard Protection Information Storage Tag Support: No 00:30:14.774 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.774 Storage Tag Check Read Support: No 00:30:14.774 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.774 ===================================================== 00:30:14.774 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:14.774 ===================================================== 00:30:14.774 Controller Capabilities/Features 00:30:14.774 ================================ 00:30:14.774 Vendor ID: 1b36 00:30:14.774 Subsystem Vendor ID: 1af4 00:30:14.774 Serial Number: 12342 00:30:14.774 Model Number: QEMU NVMe Ctrl 00:30:14.774 Firmware Version: 8.0.0 00:30:14.774 Recommended Arb Burst: 6 00:30:14.774 IEEE OUI Identifier: 00 54 52 00:30:14.774 Multi-path I/O 00:30:14.774 
May have multiple subsystem ports: No 00:30:14.774 May have multiple controllers: No 00:30:14.774 Associated with SR-IOV VF: No 00:30:14.774 Max Data Transfer Size: 524288 00:30:14.774 Max Number of Namespaces: 256 00:30:14.774 Max Number of I/O Queues: 64 00:30:14.774 NVMe Specification Version (VS): 1.4 00:30:14.774 NVMe Specification Version (Identify): 1.4 00:30:14.774 Maximum Queue Entries: 2048 00:30:14.774 Contiguous Queues Required: Yes 00:30:14.774 Arbitration Mechanisms Supported 00:30:14.774 Weighted Round Robin: Not Supported 00:30:14.774 Vendor Specific: Not Supported 00:30:14.774 Reset Timeout: 7500 ms 00:30:14.774 Doorbell Stride: 4 bytes 00:30:14.774 NVM Subsystem Reset: Not Supported 00:30:14.774 Command Sets Supported 00:30:14.774 NVM Command Set: Supported 00:30:14.774 Boot Partition: Not Supported 00:30:14.774 Memory Page Size Minimum: 4096 bytes 00:30:14.774 Memory Page Size Maximum: 65536 bytes 00:30:14.774 Persistent Memory Region: Not Supported 00:30:14.774 Optional Asynchronous Events Supported 00:30:14.774 Namespace Attribute Notices: Supported 00:30:14.774 Firmware Activation Notices: Not Supported 00:30:14.774 ANA Change Notices: Not Supported 00:30:14.774 PLE Aggregate Log Change Notices: Not Supported 00:30:14.774 LBA Status Info Alert Notices: Not Supported 00:30:14.774 EGE Aggregate Log Change Notices: Not Supported 00:30:14.774 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.774 Zone Descriptor Change Notices: Not Supported 00:30:14.774 Discovery Log Change Notices: Not Supported 00:30:14.774 Controller Attributes 00:30:14.774 128-bit Host Identifier: Not Supported 00:30:14.774 Non-Operational Permissive Mode: Not Supported 00:30:14.774 NVM Sets: Not Supported 00:30:14.774 Read Recovery Levels: Not Supported 00:30:14.774 Endurance Groups: Not Supported 00:30:14.774 Predictable Latency Mode: Not Supported 00:30:14.774 Traffic Based Keep ALive: Not Supported 00:30:14.774 Namespace Granularity: Not Supported 00:30:14.774 SQ Associations: Not Supported 00:30:14.774 UUID List: Not Supported 00:30:14.774 Multi-Domain Subsystem: Not Supported 00:30:14.774 Fixed Capacity Management: Not Supported 00:30:14.774 Variable Capacity Management: Not Supported 00:30:14.774 Delete Endurance Group: Not Supported 00:30:14.774 Delete NVM Set: Not Supported 00:30:14.774 Extended LBA Formats Supported: Supported 00:30:14.774 Flexible Data Placement Supported: Not Supported 00:30:14.774 00:30:14.774 Controller Memory Buffer Support 00:30:14.774 ================================ 00:30:14.774 Supported: No 00:30:14.774 00:30:14.774 Persistent Memory Region Support 00:30:14.774 ================================ 00:30:14.774 Supported: No 00:30:14.774 00:30:14.774 Admin Command Set Attributes 00:30:14.774 ============================ 00:30:14.774 Security Send/Receive: Not Supported 00:30:14.774 Format NVM: Supported 00:30:14.774 Firmware Activate/Download: Not Supported 00:30:14.774 Namespace Management: Supported 00:30:14.774 Device Self-Test: Not Supported 00:30:14.775 Directives: Supported 00:30:14.775 NVMe-MI: Not Supported 00:30:14.775 Virtualization Management: Not Supported 00:30:14.775 Doorbell Buffer Config: Supported 00:30:14.775 Get LBA Status Capability: Not Supported 00:30:14.775 Command & Feature Lockdown Capability: Not Supported 00:30:14.775 Abort Command Limit: 4 00:30:14.775 Async Event Request Limit: 4 00:30:14.775 Number of Firmware Slots: N/A 00:30:14.775 Firmware Slot 1 Read-Only: N/A 00:30:14.775 Firmware Activation Without Reset: N/A 00:30:14.775 
Multiple Update Detection Support: N/A 00:30:14.775 Firmware Update Granularity: No Information Provided 00:30:14.775 Per-Namespace SMART Log: Yes 00:30:14.775 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.775 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:14.775 Command Effects Log Page: Supported 00:30:14.775 Get Log Page Extended Data: Supported 00:30:14.775 Telemetry Log Pages: Not Supported 00:30:14.775 Persistent Event Log Pages: Not Supported 00:30:14.775 Supported Log Pages Log Page: May Support 00:30:14.775 Commands Supported & Effects Log Page: Not Supported 00:30:14.775 Feature Identifiers & Effects Log Page:May Support 00:30:14.775 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.775 Data Area 4 for Telemetry Log: Not Supported 00:30:14.775 Error Log Page Entries Supported: 1 00:30:14.775 Keep Alive: Not Supported 00:30:14.775 00:30:14.775 NVM Command Set Attributes 00:30:14.775 ========================== 00:30:14.775 Submission Queue Entry Size 00:30:14.775 Max: 64 00:30:14.775 Min: 64 00:30:14.775 Completion Queue Entry Size 00:30:14.775 Max: 16 00:30:14.775 Min: 16 00:30:14.775 Number of Namespaces: 256 00:30:14.775 Compare Command: Supported 00:30:14.775 Write Uncorrectable Command: Not Supported 00:30:14.775 Dataset Management Command: Supported 00:30:14.775 Write Zeroes Command: Supported 00:30:14.775 Set Features Save Field: Supported 00:30:14.775 Reservations: Not Supported 00:30:14.775 Timestamp: Supported 00:30:14.775 Copy: Supported 00:30:14.775 Volatile Write Cache: Present 00:30:14.775 Atomic Write Unit (Normal): 1 00:30:14.775 Atomic Write Unit (PFail): 1 00:30:14.775 Atomic Compare & Write Unit: 1 00:30:14.775 Fused Compare & Write: Not Supported 00:30:14.775 Scatter-Gather List 00:30:14.775 SGL Command Set: Supported 00:30:14.775 SGL Keyed: Not Supported 00:30:14.775 SGL Bit Bucket Descriptor: Not Supported 00:30:14.775 SGL Metadata Pointer: Not Supported 00:30:14.775 Oversized SGL: Not Supported 00:30:14.775 SGL Metadata Address: Not Supported 00:30:14.775 SGL Offset: Not Supported 00:30:14.775 Transport SGL Data Block: Not Supported 00:30:14.775 Replay Protected Memory Block: Not Supported 00:30:14.775 00:30:14.775 Firmware Slot Information 00:30:14.775 ========================= 00:30:14.775 Active slot: 1 00:30:14.775 Slot 1 Firmware Revision: 1.0 00:30:14.775 00:30:14.775 00:30:14.775 Commands Supported and Effects 00:30:14.775 ============================== 00:30:14.775 Admin Commands 00:30:14.775 -------------- 00:30:14.775 Delete I/O Submission Queue (00h): Supported 00:30:14.775 Create I/O Submission Queue (01h): Supported 00:30:14.775 Get Log Page (02h): Supported 00:30:14.775 Delete I/O Completion Queue (04h): Supported 00:30:14.775 Create I/O Completion Queue (05h): Supported 00:30:14.775 Identify (06h): Supported 00:30:14.775 Abort (08h): Supported 00:30:14.775 Set Features (09h): Supported 00:30:14.775 Get Features (0Ah): Supported 00:30:14.775 Asynchronous Event Request (0Ch): Supported 00:30:14.775 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.775 Directive Send (19h): Supported 00:30:14.775 Directive Receive (1Ah): Supported 00:30:14.775 Virtualization Management (1Ch): Supported 00:30:14.775 Doorbell Buffer Config (7Ch): Supported 00:30:14.775 Format NVM (80h): Supported LBA-Change 00:30:14.775 I/O Commands 00:30:14.775 ------------ 00:30:14.775 Flush (00h): Supported LBA-Change 00:30:14.775 Write (01h): Supported LBA-Change 00:30:14.775 Read (02h): Supported 00:30:14.775 Compare (05h): Supported 
00:30:14.775 Write Zeroes (08h): Supported LBA-Change 00:30:14.775 Dataset Management (09h): Supported LBA-Change 00:30:14.775 Unknown (0Ch): Supported 00:30:14.775 Unknown (12h): Supported 00:30:14.775 Copy (19h): Supported LBA-Change 00:30:14.775 Unknown (1Dh): Supported LBA-Change 00:30:14.775 00:30:14.775 Error Log 00:30:14.775 ========= 00:30:14.775 00:30:14.775 Arbitration 00:30:14.775 =========== 00:30:14.775 Arbitration Burst: no limit 00:30:14.775 00:30:14.775 Power Management 00:30:14.775 ================ 00:30:14.775 Number of Power States: 1 00:30:14.775 Current Power State: Power State #0 00:30:14.775 Power State #0: 00:30:14.775 Max Power: 25.00 W 00:30:14.775 Non-Operational State: Operational 00:30:14.775 Entry Latency: 16 microseconds 00:30:14.775 Exit Latency: 4 microseconds 00:30:14.775 Relative Read Throughput: 0 00:30:14.775 Relative Read Latency: 0 00:30:14.775 Relative Write Throughput: 0 00:30:14.775 Relative Write Latency: 0 00:30:14.775 Idle Power: Not Reported 00:30:14.775 Active Power: Not Reported 00:30:14.775 Non-Operational Permissive Mode: Not Supported 00:30:14.775 00:30:14.775 Health Information 00:30:14.775 ================== 00:30:14.775 Critical Warnings: 00:30:14.775 Available Spare Space: OK 00:30:14.775 Temperature: OK 00:30:14.775 Device Reliability: OK 00:30:14.775 Read Only: No 00:30:14.775 Volatile Memory Backup: OK 00:30:14.775 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.775 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.775 Available Spare: 0% 00:30:14.775 Available Spare Threshold: 0% 00:30:14.775 Life Percentage Used: 0% 00:30:14.775 Data Units Read: 2124 00:30:14.775 Data Units Written: 1911 00:30:14.775 Host Read Commands: 98735 00:30:14.775 Host Write Commands: 97004 00:30:14.775 Controller Busy Time: 0 minutes 00:30:14.775 Power Cycles: 0 00:30:14.775 Power On Hours: 0 hours 00:30:14.775 Unsafe Shutdowns: 0 00:30:14.775 Unrecoverable Media Errors: 0 00:30:14.775 Lifetime Error Log Entries: 0 00:30:14.775 Warning Temperature Time: 0 minutes 00:30:14.775 Critical Temperature Time: 0 minutes 00:30:14.775 00:30:14.775 Number of Queues 00:30:14.775 ================ 00:30:14.775 Number of I/O Submission Queues: 64 00:30:14.775 Number of I/O Completion Queues: 64 00:30:14.775 00:30:14.775 ZNS Specific Controller Data 00:30:14.775 ============================ 00:30:14.775 Zone Append Size Limit: 0 00:30:14.775 00:30:14.775 00:30:14.775 Active Namespaces 00:30:14.775 ================= 00:30:14.775 Namespace ID:1 00:30:14.775 Error Recovery Timeout: Unlimited 00:30:14.775 Command Set Identifier: NVM (00h) 00:30:14.775 Deallocate: Supported 00:30:14.775 Deallocated/Unwritten Error: Supported 00:30:14.775 Deallocated Read Value: All 0x00 00:30:14.775 Deallocate in Write Zeroes: Not Supported 00:30:14.775 Deallocated Guard Field: 0xFFFF 00:30:14.775 Flush: Supported 00:30:14.775 Reservation: Not Supported 00:30:14.775 Namespace Sharing Capabilities: Private 00:30:14.775 Size (in LBAs): 1048576 (4GiB) 00:30:14.775 Capacity (in LBAs): 1048576 (4GiB) 00:30:14.775 Utilization (in LBAs): 1048576 (4GiB) 00:30:14.775 Thin Provisioning: Not Supported 00:30:14.775 Per-NS Atomic Units: No 00:30:14.775 Maximum Single Source Range Length: 128 00:30:14.775 Maximum Copy Length: 128 00:30:14.775 Maximum Source Range Count: 128 00:30:14.775 NGUID/EUI64 Never Reused: No 00:30:14.775 Namespace Write Protected: No 00:30:14.775 Number of LBA Formats: 8 00:30:14.775 Current LBA Format: LBA Format #04 00:30:14.775 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:30:14.775 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.775 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.775 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.775 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.775 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.775 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.775 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.775 00:30:14.775 NVM Specific Namespace Data 00:30:14.775 =========================== 00:30:14.775 Logical Block Storage Tag Mask: 0 00:30:14.775 Protection Information Capabilities: 00:30:14.775 16b Guard Protection Information Storage Tag Support: No 00:30:14.775 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.775 Storage Tag Check Read Support: No 00:30:14.775 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.775 Namespace ID:2 00:30:14.775 Error Recovery Timeout: Unlimited 00:30:14.775 Command Set Identifier: NVM (00h) 00:30:14.775 Deallocate: Supported 00:30:14.775 Deallocated/Unwritten Error: Supported 00:30:14.775 Deallocated Read Value: All 0x00 00:30:14.775 Deallocate in Write Zeroes: Not Supported 00:30:14.776 Deallocated Guard Field: 0xFFFF 00:30:14.776 Flush: Supported 00:30:14.776 Reservation: Not Supported 00:30:14.776 Namespace Sharing Capabilities: Private 00:30:14.776 Size (in LBAs): 1048576 (4GiB) 00:30:14.776 Capacity (in LBAs): 1048576 (4GiB) 00:30:14.776 Utilization (in LBAs): 1048576 (4GiB) 00:30:14.776 Thin Provisioning: Not Supported 00:30:14.776 Per-NS Atomic Units: No 00:30:14.776 Maximum Single Source Range Length: 128 00:30:14.776 Maximum Copy Length: 128 00:30:14.776 Maximum Source Range Count: 128 00:30:14.776 NGUID/EUI64 Never Reused: No 00:30:14.776 Namespace Write Protected: No 00:30:14.776 Number of LBA Formats: 8 00:30:14.776 Current LBA Format: LBA Format #04 00:30:14.776 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.776 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.776 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.776 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.776 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.776 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.776 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.776 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.776 00:30:14.776 NVM Specific Namespace Data 00:30:14.776 =========================== 00:30:14.776 Logical Block Storage Tag Mask: 0 00:30:14.776 Protection Information Capabilities: 00:30:14.776 16b Guard Protection Information Storage Tag Support: No 00:30:14.776 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.776 Storage 
Tag Check Read Support: No 00:30:14.776 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.776 Namespace ID:3 00:30:14.776 Error Recovery Timeout: Unlimited 00:30:14.776 Command Set Identifier: NVM (00h) 00:30:14.776 Deallocate: Supported 00:30:14.776 Deallocated/Unwritten Error: Supported 00:30:14.776 Deallocated Read Value: All 0x00 00:30:14.776 Deallocate in Write Zeroes: Not Supported 00:30:14.776 Deallocated Guard Field: 0xFFFF 00:30:14.776 Flush: Supported 00:30:14.776 Reservation: Not Supported 00:30:14.776 Namespace Sharing Capabilities: Private 00:30:14.776 Size (in LBAs): 1048576 (4GiB) 00:30:15.035 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.035 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.035 Thin Provisioning: Not Supported 00:30:15.035 Per-NS Atomic Units: No 00:30:15.035 Maximum Single Source Range Length: 128 00:30:15.035 Maximum Copy Length: 128 00:30:15.035 Maximum Source Range Count: 128 00:30:15.035 NGUID/EUI64 Never Reused: No 00:30:15.035 Namespace Write Protected: No 00:30:15.035 Number of LBA Formats: 8 00:30:15.035 Current LBA Format: LBA Format #04 00:30:15.035 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.035 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.035 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.035 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.035 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.035 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.035 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.035 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.035 00:30:15.035 NVM Specific Namespace Data 00:30:15.035 =========================== 00:30:15.035 Logical Block Storage Tag Mask: 0 00:30:15.035 Protection Information Capabilities: 00:30:15.035 16b Guard Protection Information Storage Tag Support: No 00:30:15.035 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.035 Storage Tag Check Read Support: No 00:30:15.035 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.035 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:30:15.035 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:15.035 01:59:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:15.294 ===================================================== 00:30:15.294 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:15.294 ===================================================== 00:30:15.294 Controller Capabilities/Features 00:30:15.294 ================================ 00:30:15.294 Vendor ID: 1b36 00:30:15.294 Subsystem Vendor ID: 1af4 00:30:15.294 Serial Number: 12340 00:30:15.294 Model Number: QEMU NVMe Ctrl 00:30:15.294 Firmware Version: 8.0.0 00:30:15.294 Recommended Arb Burst: 6 00:30:15.294 IEEE OUI Identifier: 00 54 52 00:30:15.294 Multi-path I/O 00:30:15.294 May have multiple subsystem ports: No 00:30:15.294 May have multiple controllers: No 00:30:15.294 Associated with SR-IOV VF: No 00:30:15.294 Max Data Transfer Size: 524288 00:30:15.294 Max Number of Namespaces: 256 00:30:15.294 Max Number of I/O Queues: 64 00:30:15.294 NVMe Specification Version (VS): 1.4 00:30:15.294 NVMe Specification Version (Identify): 1.4 00:30:15.294 Maximum Queue Entries: 2048 00:30:15.294 Contiguous Queues Required: Yes 00:30:15.294 Arbitration Mechanisms Supported 00:30:15.294 Weighted Round Robin: Not Supported 00:30:15.294 Vendor Specific: Not Supported 00:30:15.294 Reset Timeout: 7500 ms 00:30:15.294 Doorbell Stride: 4 bytes 00:30:15.294 NVM Subsystem Reset: Not Supported 00:30:15.294 Command Sets Supported 00:30:15.294 NVM Command Set: Supported 00:30:15.294 Boot Partition: Not Supported 00:30:15.294 Memory Page Size Minimum: 4096 bytes 00:30:15.294 Memory Page Size Maximum: 65536 bytes 00:30:15.294 Persistent Memory Region: Not Supported 00:30:15.294 Optional Asynchronous Events Supported 00:30:15.294 Namespace Attribute Notices: Supported 00:30:15.294 Firmware Activation Notices: Not Supported 00:30:15.294 ANA Change Notices: Not Supported 00:30:15.294 PLE Aggregate Log Change Notices: Not Supported 00:30:15.294 LBA Status Info Alert Notices: Not Supported 00:30:15.294 EGE Aggregate Log Change Notices: Not Supported 00:30:15.294 Normal NVM Subsystem Shutdown event: Not Supported 00:30:15.294 Zone Descriptor Change Notices: Not Supported 00:30:15.294 Discovery Log Change Notices: Not Supported 00:30:15.294 Controller Attributes 00:30:15.294 128-bit Host Identifier: Not Supported 00:30:15.294 Non-Operational Permissive Mode: Not Supported 00:30:15.294 NVM Sets: Not Supported 00:30:15.294 Read Recovery Levels: Not Supported 00:30:15.294 Endurance Groups: Not Supported 00:30:15.294 Predictable Latency Mode: Not Supported 00:30:15.294 Traffic Based Keep ALive: Not Supported 00:30:15.294 Namespace Granularity: Not Supported 00:30:15.294 SQ Associations: Not Supported 00:30:15.294 UUID List: Not Supported 00:30:15.294 Multi-Domain Subsystem: Not Supported 00:30:15.294 Fixed Capacity Management: Not Supported 00:30:15.294 Variable Capacity Management: Not Supported 00:30:15.294 Delete Endurance Group: Not Supported 00:30:15.294 Delete NVM Set: Not Supported 00:30:15.294 Extended LBA Formats Supported: Supported 00:30:15.294 Flexible Data Placement Supported: Not Supported 00:30:15.294 00:30:15.294 Controller Memory Buffer Support 00:30:15.294 ================================ 00:30:15.294 Supported: No 00:30:15.294 00:30:15.294 Persistent Memory Region Support 00:30:15.294 
================================ 00:30:15.294 Supported: No 00:30:15.294 00:30:15.294 Admin Command Set Attributes 00:30:15.294 ============================ 00:30:15.294 Security Send/Receive: Not Supported 00:30:15.294 Format NVM: Supported 00:30:15.294 Firmware Activate/Download: Not Supported 00:30:15.294 Namespace Management: Supported 00:30:15.294 Device Self-Test: Not Supported 00:30:15.294 Directives: Supported 00:30:15.294 NVMe-MI: Not Supported 00:30:15.294 Virtualization Management: Not Supported 00:30:15.294 Doorbell Buffer Config: Supported 00:30:15.294 Get LBA Status Capability: Not Supported 00:30:15.294 Command & Feature Lockdown Capability: Not Supported 00:30:15.294 Abort Command Limit: 4 00:30:15.294 Async Event Request Limit: 4 00:30:15.294 Number of Firmware Slots: N/A 00:30:15.294 Firmware Slot 1 Read-Only: N/A 00:30:15.294 Firmware Activation Without Reset: N/A 00:30:15.294 Multiple Update Detection Support: N/A 00:30:15.294 Firmware Update Granularity: No Information Provided 00:30:15.294 Per-Namespace SMART Log: Yes 00:30:15.294 Asymmetric Namespace Access Log Page: Not Supported 00:30:15.294 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:15.294 Command Effects Log Page: Supported 00:30:15.294 Get Log Page Extended Data: Supported 00:30:15.294 Telemetry Log Pages: Not Supported 00:30:15.294 Persistent Event Log Pages: Not Supported 00:30:15.294 Supported Log Pages Log Page: May Support 00:30:15.294 Commands Supported & Effects Log Page: Not Supported 00:30:15.294 Feature Identifiers & Effects Log Page:May Support 00:30:15.294 NVMe-MI Commands & Effects Log Page: May Support 00:30:15.294 Data Area 4 for Telemetry Log: Not Supported 00:30:15.294 Error Log Page Entries Supported: 1 00:30:15.294 Keep Alive: Not Supported 00:30:15.294 00:30:15.294 NVM Command Set Attributes 00:30:15.294 ========================== 00:30:15.294 Submission Queue Entry Size 00:30:15.294 Max: 64 00:30:15.294 Min: 64 00:30:15.294 Completion Queue Entry Size 00:30:15.294 Max: 16 00:30:15.294 Min: 16 00:30:15.294 Number of Namespaces: 256 00:30:15.294 Compare Command: Supported 00:30:15.294 Write Uncorrectable Command: Not Supported 00:30:15.294 Dataset Management Command: Supported 00:30:15.294 Write Zeroes Command: Supported 00:30:15.294 Set Features Save Field: Supported 00:30:15.294 Reservations: Not Supported 00:30:15.294 Timestamp: Supported 00:30:15.294 Copy: Supported 00:30:15.294 Volatile Write Cache: Present 00:30:15.294 Atomic Write Unit (Normal): 1 00:30:15.294 Atomic Write Unit (PFail): 1 00:30:15.294 Atomic Compare & Write Unit: 1 00:30:15.294 Fused Compare & Write: Not Supported 00:30:15.294 Scatter-Gather List 00:30:15.294 SGL Command Set: Supported 00:30:15.294 SGL Keyed: Not Supported 00:30:15.294 SGL Bit Bucket Descriptor: Not Supported 00:30:15.294 SGL Metadata Pointer: Not Supported 00:30:15.294 Oversized SGL: Not Supported 00:30:15.294 SGL Metadata Address: Not Supported 00:30:15.294 SGL Offset: Not Supported 00:30:15.294 Transport SGL Data Block: Not Supported 00:30:15.294 Replay Protected Memory Block: Not Supported 00:30:15.294 00:30:15.294 Firmware Slot Information 00:30:15.294 ========================= 00:30:15.294 Active slot: 1 00:30:15.294 Slot 1 Firmware Revision: 1.0 00:30:15.294 00:30:15.294 00:30:15.294 Commands Supported and Effects 00:30:15.294 ============================== 00:30:15.294 Admin Commands 00:30:15.294 -------------- 00:30:15.294 Delete I/O Submission Queue (00h): Supported 00:30:15.294 Create I/O Submission Queue (01h): Supported 00:30:15.295 
Get Log Page (02h): Supported 00:30:15.295 Delete I/O Completion Queue (04h): Supported 00:30:15.295 Create I/O Completion Queue (05h): Supported 00:30:15.295 Identify (06h): Supported 00:30:15.295 Abort (08h): Supported 00:30:15.295 Set Features (09h): Supported 00:30:15.295 Get Features (0Ah): Supported 00:30:15.295 Asynchronous Event Request (0Ch): Supported 00:30:15.295 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:15.295 Directive Send (19h): Supported 00:30:15.295 Directive Receive (1Ah): Supported 00:30:15.295 Virtualization Management (1Ch): Supported 00:30:15.295 Doorbell Buffer Config (7Ch): Supported 00:30:15.295 Format NVM (80h): Supported LBA-Change 00:30:15.295 I/O Commands 00:30:15.295 ------------ 00:30:15.295 Flush (00h): Supported LBA-Change 00:30:15.295 Write (01h): Supported LBA-Change 00:30:15.295 Read (02h): Supported 00:30:15.295 Compare (05h): Supported 00:30:15.295 Write Zeroes (08h): Supported LBA-Change 00:30:15.295 Dataset Management (09h): Supported LBA-Change 00:30:15.295 Unknown (0Ch): Supported 00:30:15.295 Unknown (12h): Supported 00:30:15.295 Copy (19h): Supported LBA-Change 00:30:15.295 Unknown (1Dh): Supported LBA-Change 00:30:15.295 00:30:15.295 Error Log 00:30:15.295 ========= 00:30:15.295 00:30:15.295 Arbitration 00:30:15.295 =========== 00:30:15.295 Arbitration Burst: no limit 00:30:15.295 00:30:15.295 Power Management 00:30:15.295 ================ 00:30:15.295 Number of Power States: 1 00:30:15.295 Current Power State: Power State #0 00:30:15.295 Power State #0: 00:30:15.295 Max Power: 25.00 W 00:30:15.295 Non-Operational State: Operational 00:30:15.295 Entry Latency: 16 microseconds 00:30:15.295 Exit Latency: 4 microseconds 00:30:15.295 Relative Read Throughput: 0 00:30:15.295 Relative Read Latency: 0 00:30:15.295 Relative Write Throughput: 0 00:30:15.295 Relative Write Latency: 0 00:30:15.295 Idle Power: Not Reported 00:30:15.295 Active Power: Not Reported 00:30:15.295 Non-Operational Permissive Mode: Not Supported 00:30:15.295 00:30:15.295 Health Information 00:30:15.295 ================== 00:30:15.295 Critical Warnings: 00:30:15.295 Available Spare Space: OK 00:30:15.295 Temperature: OK 00:30:15.295 Device Reliability: OK 00:30:15.295 Read Only: No 00:30:15.295 Volatile Memory Backup: OK 00:30:15.295 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.295 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.295 Available Spare: 0% 00:30:15.295 Available Spare Threshold: 0% 00:30:15.295 Life Percentage Used: 0% 00:30:15.295 Data Units Read: 674 00:30:15.295 Data Units Written: 603 00:30:15.295 Host Read Commands: 32348 00:30:15.295 Host Write Commands: 32134 00:30:15.295 Controller Busy Time: 0 minutes 00:30:15.295 Power Cycles: 0 00:30:15.295 Power On Hours: 0 hours 00:30:15.295 Unsafe Shutdowns: 0 00:30:15.295 Unrecoverable Media Errors: 0 00:30:15.295 Lifetime Error Log Entries: 0 00:30:15.295 Warning Temperature Time: 0 minutes 00:30:15.295 Critical Temperature Time: 0 minutes 00:30:15.295 00:30:15.295 Number of Queues 00:30:15.295 ================ 00:30:15.295 Number of I/O Submission Queues: 64 00:30:15.295 Number of I/O Completion Queues: 64 00:30:15.295 00:30:15.295 ZNS Specific Controller Data 00:30:15.295 ============================ 00:30:15.295 Zone Append Size Limit: 0 00:30:15.295 00:30:15.295 00:30:15.295 Active Namespaces 00:30:15.295 ================= 00:30:15.295 Namespace ID:1 00:30:15.295 Error Recovery Timeout: Unlimited 00:30:15.295 Command Set Identifier: NVM (00h) 00:30:15.295 Deallocate: Supported 
00:30:15.295 Deallocated/Unwritten Error: Supported 00:30:15.295 Deallocated Read Value: All 0x00 00:30:15.295 Deallocate in Write Zeroes: Not Supported 00:30:15.295 Deallocated Guard Field: 0xFFFF 00:30:15.295 Flush: Supported 00:30:15.295 Reservation: Not Supported 00:30:15.295 Metadata Transferred as: Separate Metadata Buffer 00:30:15.295 Namespace Sharing Capabilities: Private 00:30:15.295 Size (in LBAs): 1548666 (5GiB) 00:30:15.295 Capacity (in LBAs): 1548666 (5GiB) 00:30:15.295 Utilization (in LBAs): 1548666 (5GiB) 00:30:15.295 Thin Provisioning: Not Supported 00:30:15.295 Per-NS Atomic Units: No 00:30:15.295 Maximum Single Source Range Length: 128 00:30:15.295 Maximum Copy Length: 128 00:30:15.295 Maximum Source Range Count: 128 00:30:15.295 NGUID/EUI64 Never Reused: No 00:30:15.295 Namespace Write Protected: No 00:30:15.295 Number of LBA Formats: 8 00:30:15.295 Current LBA Format: LBA Format #07 00:30:15.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.295 00:30:15.295 NVM Specific Namespace Data 00:30:15.295 =========================== 00:30:15.295 Logical Block Storage Tag Mask: 0 00:30:15.295 Protection Information Capabilities: 00:30:15.295 16b Guard Protection Information Storage Tag Support: No 00:30:15.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.295 Storage Tag Check Read Support: No 00:30:15.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.295 01:59:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:15.295 01:59:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:30:15.554 ===================================================== 00:30:15.554 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:15.554 ===================================================== 00:30:15.554 Controller Capabilities/Features 00:30:15.554 ================================ 00:30:15.554 Vendor ID: 1b36 00:30:15.554 Subsystem Vendor ID: 1af4 00:30:15.554 Serial Number: 12341 00:30:15.554 Model Number: QEMU NVMe Ctrl 00:30:15.554 Firmware Version: 8.0.0 00:30:15.554 Recommended Arb Burst: 6 00:30:15.554 IEEE OUI Identifier: 00 54 52 00:30:15.554 Multi-path I/O 00:30:15.554 May have multiple subsystem ports: No 00:30:15.554 May have multiple 
controllers: No 00:30:15.554 Associated with SR-IOV VF: No 00:30:15.554 Max Data Transfer Size: 524288 00:30:15.554 Max Number of Namespaces: 256 00:30:15.554 Max Number of I/O Queues: 64 00:30:15.554 NVMe Specification Version (VS): 1.4 00:30:15.554 NVMe Specification Version (Identify): 1.4 00:30:15.554 Maximum Queue Entries: 2048 00:30:15.554 Contiguous Queues Required: Yes 00:30:15.554 Arbitration Mechanisms Supported 00:30:15.554 Weighted Round Robin: Not Supported 00:30:15.554 Vendor Specific: Not Supported 00:30:15.554 Reset Timeout: 7500 ms 00:30:15.554 Doorbell Stride: 4 bytes 00:30:15.554 NVM Subsystem Reset: Not Supported 00:30:15.554 Command Sets Supported 00:30:15.554 NVM Command Set: Supported 00:30:15.554 Boot Partition: Not Supported 00:30:15.554 Memory Page Size Minimum: 4096 bytes 00:30:15.554 Memory Page Size Maximum: 65536 bytes 00:30:15.554 Persistent Memory Region: Not Supported 00:30:15.554 Optional Asynchronous Events Supported 00:30:15.554 Namespace Attribute Notices: Supported 00:30:15.554 Firmware Activation Notices: Not Supported 00:30:15.554 ANA Change Notices: Not Supported 00:30:15.554 PLE Aggregate Log Change Notices: Not Supported 00:30:15.554 LBA Status Info Alert Notices: Not Supported 00:30:15.554 EGE Aggregate Log Change Notices: Not Supported 00:30:15.554 Normal NVM Subsystem Shutdown event: Not Supported 00:30:15.554 Zone Descriptor Change Notices: Not Supported 00:30:15.554 Discovery Log Change Notices: Not Supported 00:30:15.554 Controller Attributes 00:30:15.554 128-bit Host Identifier: Not Supported 00:30:15.554 Non-Operational Permissive Mode: Not Supported 00:30:15.555 NVM Sets: Not Supported 00:30:15.555 Read Recovery Levels: Not Supported 00:30:15.555 Endurance Groups: Not Supported 00:30:15.555 Predictable Latency Mode: Not Supported 00:30:15.555 Traffic Based Keep ALive: Not Supported 00:30:15.555 Namespace Granularity: Not Supported 00:30:15.555 SQ Associations: Not Supported 00:30:15.555 UUID List: Not Supported 00:30:15.555 Multi-Domain Subsystem: Not Supported 00:30:15.555 Fixed Capacity Management: Not Supported 00:30:15.555 Variable Capacity Management: Not Supported 00:30:15.555 Delete Endurance Group: Not Supported 00:30:15.555 Delete NVM Set: Not Supported 00:30:15.555 Extended LBA Formats Supported: Supported 00:30:15.555 Flexible Data Placement Supported: Not Supported 00:30:15.555 00:30:15.555 Controller Memory Buffer Support 00:30:15.555 ================================ 00:30:15.555 Supported: No 00:30:15.555 00:30:15.555 Persistent Memory Region Support 00:30:15.555 ================================ 00:30:15.555 Supported: No 00:30:15.555 00:30:15.555 Admin Command Set Attributes 00:30:15.555 ============================ 00:30:15.555 Security Send/Receive: Not Supported 00:30:15.555 Format NVM: Supported 00:30:15.555 Firmware Activate/Download: Not Supported 00:30:15.555 Namespace Management: Supported 00:30:15.555 Device Self-Test: Not Supported 00:30:15.555 Directives: Supported 00:30:15.555 NVMe-MI: Not Supported 00:30:15.555 Virtualization Management: Not Supported 00:30:15.555 Doorbell Buffer Config: Supported 00:30:15.555 Get LBA Status Capability: Not Supported 00:30:15.555 Command & Feature Lockdown Capability: Not Supported 00:30:15.555 Abort Command Limit: 4 00:30:15.555 Async Event Request Limit: 4 00:30:15.555 Number of Firmware Slots: N/A 00:30:15.555 Firmware Slot 1 Read-Only: N/A 00:30:15.555 Firmware Activation Without Reset: N/A 00:30:15.555 Multiple Update Detection Support: N/A 00:30:15.555 Firmware Update 
Granularity: No Information Provided 00:30:15.555 Per-Namespace SMART Log: Yes 00:30:15.555 Asymmetric Namespace Access Log Page: Not Supported 00:30:15.555 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:15.555 Command Effects Log Page: Supported 00:30:15.555 Get Log Page Extended Data: Supported 00:30:15.555 Telemetry Log Pages: Not Supported 00:30:15.555 Persistent Event Log Pages: Not Supported 00:30:15.555 Supported Log Pages Log Page: May Support 00:30:15.555 Commands Supported & Effects Log Page: Not Supported 00:30:15.555 Feature Identifiers & Effects Log Page:May Support 00:30:15.555 NVMe-MI Commands & Effects Log Page: May Support 00:30:15.555 Data Area 4 for Telemetry Log: Not Supported 00:30:15.555 Error Log Page Entries Supported: 1 00:30:15.555 Keep Alive: Not Supported 00:30:15.555 00:30:15.555 NVM Command Set Attributes 00:30:15.555 ========================== 00:30:15.555 Submission Queue Entry Size 00:30:15.555 Max: 64 00:30:15.555 Min: 64 00:30:15.555 Completion Queue Entry Size 00:30:15.555 Max: 16 00:30:15.555 Min: 16 00:30:15.555 Number of Namespaces: 256 00:30:15.555 Compare Command: Supported 00:30:15.555 Write Uncorrectable Command: Not Supported 00:30:15.555 Dataset Management Command: Supported 00:30:15.555 Write Zeroes Command: Supported 00:30:15.555 Set Features Save Field: Supported 00:30:15.555 Reservations: Not Supported 00:30:15.555 Timestamp: Supported 00:30:15.555 Copy: Supported 00:30:15.555 Volatile Write Cache: Present 00:30:15.555 Atomic Write Unit (Normal): 1 00:30:15.555 Atomic Write Unit (PFail): 1 00:30:15.555 Atomic Compare & Write Unit: 1 00:30:15.555 Fused Compare & Write: Not Supported 00:30:15.555 Scatter-Gather List 00:30:15.555 SGL Command Set: Supported 00:30:15.555 SGL Keyed: Not Supported 00:30:15.555 SGL Bit Bucket Descriptor: Not Supported 00:30:15.555 SGL Metadata Pointer: Not Supported 00:30:15.555 Oversized SGL: Not Supported 00:30:15.555 SGL Metadata Address: Not Supported 00:30:15.555 SGL Offset: Not Supported 00:30:15.555 Transport SGL Data Block: Not Supported 00:30:15.555 Replay Protected Memory Block: Not Supported 00:30:15.555 00:30:15.555 Firmware Slot Information 00:30:15.555 ========================= 00:30:15.555 Active slot: 1 00:30:15.555 Slot 1 Firmware Revision: 1.0 00:30:15.555 00:30:15.555 00:30:15.555 Commands Supported and Effects 00:30:15.555 ============================== 00:30:15.555 Admin Commands 00:30:15.555 -------------- 00:30:15.555 Delete I/O Submission Queue (00h): Supported 00:30:15.555 Create I/O Submission Queue (01h): Supported 00:30:15.555 Get Log Page (02h): Supported 00:30:15.555 Delete I/O Completion Queue (04h): Supported 00:30:15.555 Create I/O Completion Queue (05h): Supported 00:30:15.555 Identify (06h): Supported 00:30:15.555 Abort (08h): Supported 00:30:15.555 Set Features (09h): Supported 00:30:15.555 Get Features (0Ah): Supported 00:30:15.555 Asynchronous Event Request (0Ch): Supported 00:30:15.555 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:15.555 Directive Send (19h): Supported 00:30:15.555 Directive Receive (1Ah): Supported 00:30:15.555 Virtualization Management (1Ch): Supported 00:30:15.555 Doorbell Buffer Config (7Ch): Supported 00:30:15.555 Format NVM (80h): Supported LBA-Change 00:30:15.555 I/O Commands 00:30:15.555 ------------ 00:30:15.555 Flush (00h): Supported LBA-Change 00:30:15.555 Write (01h): Supported LBA-Change 00:30:15.555 Read (02h): Supported 00:30:15.555 Compare (05h): Supported 00:30:15.555 Write Zeroes (08h): Supported LBA-Change 00:30:15.555 
Dataset Management (09h): Supported LBA-Change 00:30:15.555 Unknown (0Ch): Supported 00:30:15.555 Unknown (12h): Supported 00:30:15.556 Copy (19h): Supported LBA-Change 00:30:15.556 Unknown (1Dh): Supported LBA-Change 00:30:15.556 00:30:15.556 Error Log 00:30:15.556 ========= 00:30:15.556 00:30:15.556 Arbitration 00:30:15.556 =========== 00:30:15.556 Arbitration Burst: no limit 00:30:15.556 00:30:15.556 Power Management 00:30:15.556 ================ 00:30:15.556 Number of Power States: 1 00:30:15.556 Current Power State: Power State #0 00:30:15.556 Power State #0: 00:30:15.556 Max Power: 25.00 W 00:30:15.556 Non-Operational State: Operational 00:30:15.556 Entry Latency: 16 microseconds 00:30:15.556 Exit Latency: 4 microseconds 00:30:15.556 Relative Read Throughput: 0 00:30:15.556 Relative Read Latency: 0 00:30:15.556 Relative Write Throughput: 0 00:30:15.556 Relative Write Latency: 0 00:30:15.556 Idle Power: Not Reported 00:30:15.556 Active Power: Not Reported 00:30:15.556 Non-Operational Permissive Mode: Not Supported 00:30:15.556 00:30:15.556 Health Information 00:30:15.556 ================== 00:30:15.556 Critical Warnings: 00:30:15.556 Available Spare Space: OK 00:30:15.556 Temperature: OK 00:30:15.556 Device Reliability: OK 00:30:15.556 Read Only: No 00:30:15.556 Volatile Memory Backup: OK 00:30:15.556 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.556 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.556 Available Spare: 0% 00:30:15.556 Available Spare Threshold: 0% 00:30:15.556 Life Percentage Used: 0% 00:30:15.556 Data Units Read: 999 00:30:15.556 Data Units Written: 872 00:30:15.556 Host Read Commands: 47337 00:30:15.556 Host Write Commands: 46243 00:30:15.556 Controller Busy Time: 0 minutes 00:30:15.556 Power Cycles: 0 00:30:15.556 Power On Hours: 0 hours 00:30:15.556 Unsafe Shutdowns: 0 00:30:15.556 Unrecoverable Media Errors: 0 00:30:15.556 Lifetime Error Log Entries: 0 00:30:15.556 Warning Temperature Time: 0 minutes 00:30:15.556 Critical Temperature Time: 0 minutes 00:30:15.556 00:30:15.556 Number of Queues 00:30:15.556 ================ 00:30:15.556 Number of I/O Submission Queues: 64 00:30:15.556 Number of I/O Completion Queues: 64 00:30:15.556 00:30:15.556 ZNS Specific Controller Data 00:30:15.556 ============================ 00:30:15.556 Zone Append Size Limit: 0 00:30:15.556 00:30:15.556 00:30:15.556 Active Namespaces 00:30:15.556 ================= 00:30:15.556 Namespace ID:1 00:30:15.556 Error Recovery Timeout: Unlimited 00:30:15.556 Command Set Identifier: NVM (00h) 00:30:15.556 Deallocate: Supported 00:30:15.556 Deallocated/Unwritten Error: Supported 00:30:15.556 Deallocated Read Value: All 0x00 00:30:15.556 Deallocate in Write Zeroes: Not Supported 00:30:15.556 Deallocated Guard Field: 0xFFFF 00:30:15.556 Flush: Supported 00:30:15.556 Reservation: Not Supported 00:30:15.556 Namespace Sharing Capabilities: Private 00:30:15.556 Size (in LBAs): 1310720 (5GiB) 00:30:15.556 Capacity (in LBAs): 1310720 (5GiB) 00:30:15.556 Utilization (in LBAs): 1310720 (5GiB) 00:30:15.556 Thin Provisioning: Not Supported 00:30:15.556 Per-NS Atomic Units: No 00:30:15.556 Maximum Single Source Range Length: 128 00:30:15.556 Maximum Copy Length: 128 00:30:15.556 Maximum Source Range Count: 128 00:30:15.556 NGUID/EUI64 Never Reused: No 00:30:15.556 Namespace Write Protected: No 00:30:15.556 Number of LBA Formats: 8 00:30:15.556 Current LBA Format: LBA Format #04 00:30:15.556 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.556 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:30:15.556 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.556 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.556 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.556 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.556 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.556 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.556 00:30:15.556 NVM Specific Namespace Data 00:30:15.556 =========================== 00:30:15.556 Logical Block Storage Tag Mask: 0 00:30:15.556 Protection Information Capabilities: 00:30:15.556 16b Guard Protection Information Storage Tag Support: No 00:30:15.556 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.556 Storage Tag Check Read Support: No 00:30:15.556 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.556 01:59:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:15.556 01:59:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:30:15.815 ===================================================== 00:30:15.815 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:15.815 ===================================================== 00:30:15.815 Controller Capabilities/Features 00:30:15.815 ================================ 00:30:15.815 Vendor ID: 1b36 00:30:15.815 Subsystem Vendor ID: 1af4 00:30:15.815 Serial Number: 12342 00:30:15.815 Model Number: QEMU NVMe Ctrl 00:30:15.815 Firmware Version: 8.0.0 00:30:15.815 Recommended Arb Burst: 6 00:30:15.815 IEEE OUI Identifier: 00 54 52 00:30:15.815 Multi-path I/O 00:30:15.815 May have multiple subsystem ports: No 00:30:15.815 May have multiple controllers: No 00:30:15.815 Associated with SR-IOV VF: No 00:30:15.815 Max Data Transfer Size: 524288 00:30:15.815 Max Number of Namespaces: 256 00:30:15.815 Max Number of I/O Queues: 64 00:30:15.815 NVMe Specification Version (VS): 1.4 00:30:15.815 NVMe Specification Version (Identify): 1.4 00:30:15.815 Maximum Queue Entries: 2048 00:30:15.815 Contiguous Queues Required: Yes 00:30:15.815 Arbitration Mechanisms Supported 00:30:15.815 Weighted Round Robin: Not Supported 00:30:15.815 Vendor Specific: Not Supported 00:30:15.815 Reset Timeout: 7500 ms 00:30:15.815 Doorbell Stride: 4 bytes 00:30:15.815 NVM Subsystem Reset: Not Supported 00:30:15.815 Command Sets Supported 00:30:15.815 NVM Command Set: Supported 00:30:15.815 Boot Partition: Not Supported 00:30:15.815 Memory Page Size Minimum: 4096 bytes 00:30:15.815 Memory Page Size Maximum: 65536 bytes 00:30:15.815 Persistent Memory Region: Not Supported 00:30:15.815 Optional Asynchronous Events Supported 00:30:15.815 Namespace Attribute Notices: Supported 00:30:15.815 Firmware 
Activation Notices: Not Supported 00:30:15.815 ANA Change Notices: Not Supported 00:30:15.815 PLE Aggregate Log Change Notices: Not Supported 00:30:15.815 LBA Status Info Alert Notices: Not Supported 00:30:15.815 EGE Aggregate Log Change Notices: Not Supported 00:30:15.815 Normal NVM Subsystem Shutdown event: Not Supported 00:30:15.815 Zone Descriptor Change Notices: Not Supported 00:30:15.815 Discovery Log Change Notices: Not Supported 00:30:15.815 Controller Attributes 00:30:15.815 128-bit Host Identifier: Not Supported 00:30:15.815 Non-Operational Permissive Mode: Not Supported 00:30:15.815 NVM Sets: Not Supported 00:30:15.815 Read Recovery Levels: Not Supported 00:30:15.815 Endurance Groups: Not Supported 00:30:15.816 Predictable Latency Mode: Not Supported 00:30:15.816 Traffic Based Keep ALive: Not Supported 00:30:15.816 Namespace Granularity: Not Supported 00:30:15.816 SQ Associations: Not Supported 00:30:15.816 UUID List: Not Supported 00:30:15.816 Multi-Domain Subsystem: Not Supported 00:30:15.816 Fixed Capacity Management: Not Supported 00:30:15.816 Variable Capacity Management: Not Supported 00:30:15.816 Delete Endurance Group: Not Supported 00:30:15.816 Delete NVM Set: Not Supported 00:30:15.816 Extended LBA Formats Supported: Supported 00:30:15.816 Flexible Data Placement Supported: Not Supported 00:30:15.816 00:30:15.816 Controller Memory Buffer Support 00:30:15.816 ================================ 00:30:15.816 Supported: No 00:30:15.816 00:30:15.816 Persistent Memory Region Support 00:30:15.816 ================================ 00:30:15.816 Supported: No 00:30:15.816 00:30:15.816 Admin Command Set Attributes 00:30:15.816 ============================ 00:30:15.816 Security Send/Receive: Not Supported 00:30:15.816 Format NVM: Supported 00:30:15.816 Firmware Activate/Download: Not Supported 00:30:15.816 Namespace Management: Supported 00:30:15.816 Device Self-Test: Not Supported 00:30:15.816 Directives: Supported 00:30:15.816 NVMe-MI: Not Supported 00:30:15.816 Virtualization Management: Not Supported 00:30:15.816 Doorbell Buffer Config: Supported 00:30:15.816 Get LBA Status Capability: Not Supported 00:30:15.816 Command & Feature Lockdown Capability: Not Supported 00:30:15.816 Abort Command Limit: 4 00:30:15.816 Async Event Request Limit: 4 00:30:15.816 Number of Firmware Slots: N/A 00:30:15.816 Firmware Slot 1 Read-Only: N/A 00:30:15.816 Firmware Activation Without Reset: N/A 00:30:15.816 Multiple Update Detection Support: N/A 00:30:15.816 Firmware Update Granularity: No Information Provided 00:30:15.816 Per-Namespace SMART Log: Yes 00:30:15.816 Asymmetric Namespace Access Log Page: Not Supported 00:30:15.816 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:15.816 Command Effects Log Page: Supported 00:30:15.816 Get Log Page Extended Data: Supported 00:30:15.816 Telemetry Log Pages: Not Supported 00:30:15.816 Persistent Event Log Pages: Not Supported 00:30:15.816 Supported Log Pages Log Page: May Support 00:30:15.816 Commands Supported & Effects Log Page: Not Supported 00:30:15.816 Feature Identifiers & Effects Log Page:May Support 00:30:15.816 NVMe-MI Commands & Effects Log Page: May Support 00:30:15.816 Data Area 4 for Telemetry Log: Not Supported 00:30:15.816 Error Log Page Entries Supported: 1 00:30:15.816 Keep Alive: Not Supported 00:30:15.816 00:30:15.816 NVM Command Set Attributes 00:30:15.816 ========================== 00:30:15.816 Submission Queue Entry Size 00:30:15.816 Max: 64 00:30:15.816 Min: 64 00:30:15.816 Completion Queue Entry Size 00:30:15.816 Max: 16 
00:30:15.816 Min: 16 00:30:15.816 Number of Namespaces: 256 00:30:15.816 Compare Command: Supported 00:30:15.816 Write Uncorrectable Command: Not Supported 00:30:15.816 Dataset Management Command: Supported 00:30:15.816 Write Zeroes Command: Supported 00:30:15.816 Set Features Save Field: Supported 00:30:15.816 Reservations: Not Supported 00:30:15.816 Timestamp: Supported 00:30:15.816 Copy: Supported 00:30:15.816 Volatile Write Cache: Present 00:30:15.816 Atomic Write Unit (Normal): 1 00:30:15.816 Atomic Write Unit (PFail): 1 00:30:15.816 Atomic Compare & Write Unit: 1 00:30:15.816 Fused Compare & Write: Not Supported 00:30:15.816 Scatter-Gather List 00:30:15.816 SGL Command Set: Supported 00:30:15.816 SGL Keyed: Not Supported 00:30:15.816 SGL Bit Bucket Descriptor: Not Supported 00:30:15.816 SGL Metadata Pointer: Not Supported 00:30:15.816 Oversized SGL: Not Supported 00:30:15.816 SGL Metadata Address: Not Supported 00:30:15.816 SGL Offset: Not Supported 00:30:15.816 Transport SGL Data Block: Not Supported 00:30:15.816 Replay Protected Memory Block: Not Supported 00:30:15.816 00:30:15.816 Firmware Slot Information 00:30:15.816 ========================= 00:30:15.816 Active slot: 1 00:30:15.816 Slot 1 Firmware Revision: 1.0 00:30:15.816 00:30:15.816 00:30:15.816 Commands Supported and Effects 00:30:15.816 ============================== 00:30:15.816 Admin Commands 00:30:15.816 -------------- 00:30:15.816 Delete I/O Submission Queue (00h): Supported 00:30:15.816 Create I/O Submission Queue (01h): Supported 00:30:15.816 Get Log Page (02h): Supported 00:30:15.816 Delete I/O Completion Queue (04h): Supported 00:30:15.816 Create I/O Completion Queue (05h): Supported 00:30:15.816 Identify (06h): Supported 00:30:15.816 Abort (08h): Supported 00:30:15.816 Set Features (09h): Supported 00:30:15.816 Get Features (0Ah): Supported 00:30:15.816 Asynchronous Event Request (0Ch): Supported 00:30:15.816 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:15.816 Directive Send (19h): Supported 00:30:15.816 Directive Receive (1Ah): Supported 00:30:15.816 Virtualization Management (1Ch): Supported 00:30:15.816 Doorbell Buffer Config (7Ch): Supported 00:30:15.816 Format NVM (80h): Supported LBA-Change 00:30:15.816 I/O Commands 00:30:15.816 ------------ 00:30:15.816 Flush (00h): Supported LBA-Change 00:30:15.816 Write (01h): Supported LBA-Change 00:30:15.816 Read (02h): Supported 00:30:15.816 Compare (05h): Supported 00:30:15.816 Write Zeroes (08h): Supported LBA-Change 00:30:15.816 Dataset Management (09h): Supported LBA-Change 00:30:15.816 Unknown (0Ch): Supported 00:30:15.816 Unknown (12h): Supported 00:30:15.816 Copy (19h): Supported LBA-Change 00:30:15.816 Unknown (1Dh): Supported LBA-Change 00:30:15.816 00:30:15.816 Error Log 00:30:15.816 ========= 00:30:15.816 00:30:15.816 Arbitration 00:30:15.816 =========== 00:30:15.816 Arbitration Burst: no limit 00:30:15.816 00:30:15.816 Power Management 00:30:15.816 ================ 00:30:15.816 Number of Power States: 1 00:30:15.816 Current Power State: Power State #0 00:30:15.816 Power State #0: 00:30:15.816 Max Power: 25.00 W 00:30:15.816 Non-Operational State: Operational 00:30:15.816 Entry Latency: 16 microseconds 00:30:15.816 Exit Latency: 4 microseconds 00:30:15.816 Relative Read Throughput: 0 00:30:15.816 Relative Read Latency: 0 00:30:15.816 Relative Write Throughput: 0 00:30:15.816 Relative Write Latency: 0 00:30:15.816 Idle Power: Not Reported 00:30:15.816 Active Power: Not Reported 00:30:15.816 Non-Operational Permissive Mode: Not Supported 
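The identify dumps above and below report each namespace's size both as a raw LBA count and as a GiB figure, and temperatures both in Kelvin and Celsius. A minimal sketch of the arithmetic behind those derived figures, with values copied from the 0000:00:11.0 dump (illustrative only, not part of the nvme.sh harness):

    # Re-derive the figures spdk_nvme_identify prints alongside the raw values.
    lbas=1310720   # "Size (in LBAs)" for the 0000:00:11.0 namespace
    bsize=4096     # data size of its current LBA format (#04)
    echo "$(( lbas * bsize / 1024**3 )) GiB"   # prints "5 GiB", matching the dump

    kelvin=323     # "Current Temperature: 323 Kelvin"
    echo "$(( kelvin - 273 )) Celsius"         # prints "50 Celsius", matching the dump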
00:30:15.816 00:30:15.816 Health Information 00:30:15.816 ================== 00:30:15.816 Critical Warnings: 00:30:15.816 Available Spare Space: OK 00:30:15.816 Temperature: OK 00:30:15.816 Device Reliability: OK 00:30:15.816 Read Only: No 00:30:15.816 Volatile Memory Backup: OK 00:30:15.816 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.816 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.816 Available Spare: 0% 00:30:15.816 Available Spare Threshold: 0% 00:30:15.816 Life Percentage Used: 0% 00:30:15.816 Data Units Read: 2124 00:30:15.816 Data Units Written: 1911 00:30:15.816 Host Read Commands: 98735 00:30:15.816 Host Write Commands: 97004 00:30:15.816 Controller Busy Time: 0 minutes 00:30:15.816 Power Cycles: 0 00:30:15.816 Power On Hours: 0 hours 00:30:15.816 Unsafe Shutdowns: 0 00:30:15.816 Unrecoverable Media Errors: 0 00:30:15.816 Lifetime Error Log Entries: 0 00:30:15.816 Warning Temperature Time: 0 minutes 00:30:15.816 Critical Temperature Time: 0 minutes 00:30:15.816 00:30:15.816 Number of Queues 00:30:15.816 ================ 00:30:15.816 Number of I/O Submission Queues: 64 00:30:15.817 Number of I/O Completion Queues: 64 00:30:15.817 00:30:15.817 ZNS Specific Controller Data 00:30:15.817 ============================ 00:30:15.817 Zone Append Size Limit: 0 00:30:15.817 00:30:15.817 00:30:15.817 Active Namespaces 00:30:15.817 ================= 00:30:15.817 Namespace ID:1 00:30:15.817 Error Recovery Timeout: Unlimited 00:30:15.817 Command Set Identifier: NVM (00h) 00:30:15.817 Deallocate: Supported 00:30:15.817 Deallocated/Unwritten Error: Supported 00:30:15.817 Deallocated Read Value: All 0x00 00:30:15.817 Deallocate in Write Zeroes: Not Supported 00:30:15.817 Deallocated Guard Field: 0xFFFF 00:30:15.817 Flush: Supported 00:30:15.817 Reservation: Not Supported 00:30:15.817 Namespace Sharing Capabilities: Private 00:30:15.817 Size (in LBAs): 1048576 (4GiB) 00:30:15.817 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.817 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.817 Thin Provisioning: Not Supported 00:30:15.817 Per-NS Atomic Units: No 00:30:15.817 Maximum Single Source Range Length: 128 00:30:15.817 Maximum Copy Length: 128 00:30:15.817 Maximum Source Range Count: 128 00:30:15.817 NGUID/EUI64 Never Reused: No 00:30:15.817 Namespace Write Protected: No 00:30:15.817 Number of LBA Formats: 8 00:30:15.817 Current LBA Format: LBA Format #04 00:30:15.817 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.817 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.817 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.817 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.817 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.817 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.817 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.817 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.817 00:30:15.817 NVM Specific Namespace Data 00:30:15.817 =========================== 00:30:15.817 Logical Block Storage Tag Mask: 0 00:30:15.817 Protection Information Capabilities: 00:30:15.817 16b Guard Protection Information Storage Tag Support: No 00:30:15.817 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.817 Storage Tag Check Read Support: No 00:30:15.817 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Namespace ID:2 00:30:15.817 Error Recovery Timeout: Unlimited 00:30:15.817 Command Set Identifier: NVM (00h) 00:30:15.817 Deallocate: Supported 00:30:15.817 Deallocated/Unwritten Error: Supported 00:30:15.817 Deallocated Read Value: All 0x00 00:30:15.817 Deallocate in Write Zeroes: Not Supported 00:30:15.817 Deallocated Guard Field: 0xFFFF 00:30:15.817 Flush: Supported 00:30:15.817 Reservation: Not Supported 00:30:15.817 Namespace Sharing Capabilities: Private 00:30:15.817 Size (in LBAs): 1048576 (4GiB) 00:30:15.817 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.817 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.817 Thin Provisioning: Not Supported 00:30:15.817 Per-NS Atomic Units: No 00:30:15.817 Maximum Single Source Range Length: 128 00:30:15.817 Maximum Copy Length: 128 00:30:15.817 Maximum Source Range Count: 128 00:30:15.817 NGUID/EUI64 Never Reused: No 00:30:15.817 Namespace Write Protected: No 00:30:15.817 Number of LBA Formats: 8 00:30:15.817 Current LBA Format: LBA Format #04 00:30:15.817 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.817 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.817 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.817 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.817 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.817 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.817 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.817 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.817 00:30:15.817 NVM Specific Namespace Data 00:30:15.817 =========================== 00:30:15.817 Logical Block Storage Tag Mask: 0 00:30:15.817 Protection Information Capabilities: 00:30:15.817 16b Guard Protection Information Storage Tag Support: No 00:30:15.817 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.817 Storage Tag Check Read Support: No 00:30:15.817 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.817 Namespace ID:3 00:30:15.817 Error Recovery Timeout: Unlimited 00:30:15.817 Command Set Identifier: NVM (00h) 00:30:15.817 Deallocate: Supported 00:30:15.817 Deallocated/Unwritten Error: Supported 00:30:15.817 Deallocated Read 
Value: All 0x00 00:30:15.817 Deallocate in Write Zeroes: Not Supported 00:30:15.817 Deallocated Guard Field: 0xFFFF 00:30:15.817 Flush: Supported 00:30:15.817 Reservation: Not Supported 00:30:15.817 Namespace Sharing Capabilities: Private 00:30:15.817 Size (in LBAs): 1048576 (4GiB) 00:30:15.817 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.817 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.817 Thin Provisioning: Not Supported 00:30:15.817 Per-NS Atomic Units: No 00:30:15.817 Maximum Single Source Range Length: 128 00:30:15.817 Maximum Copy Length: 128 00:30:15.817 Maximum Source Range Count: 128 00:30:15.817 NGUID/EUI64 Never Reused: No 00:30:15.817 Namespace Write Protected: No 00:30:15.817 Number of LBA Formats: 8 00:30:15.817 Current LBA Format: LBA Format #04 00:30:15.817 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.817 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.817 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.817 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.817 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.817 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.817 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.817 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.817 00:30:15.817 NVM Specific Namespace Data 00:30:15.817 =========================== 00:30:15.817 Logical Block Storage Tag Mask: 0 00:30:15.817 Protection Information Capabilities: 00:30:15.817 16b Guard Protection Information Storage Tag Support: No 00:30:15.817 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:16.075 Storage Tag Check Read Support: No 00:30:16.075 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.075 01:59:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:16.075 01:59:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:30:16.334 ===================================================== 00:30:16.334 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:16.334 ===================================================== 00:30:16.334 Controller Capabilities/Features 00:30:16.334 ================================ 00:30:16.334 Vendor ID: 1b36 00:30:16.334 Subsystem Vendor ID: 1af4 00:30:16.334 Serial Number: 12343 00:30:16.334 Model Number: QEMU NVMe Ctrl 00:30:16.334 Firmware Version: 8.0.0 00:30:16.334 Recommended Arb Burst: 6 00:30:16.334 IEEE OUI Identifier: 00 54 52 00:30:16.334 Multi-path I/O 00:30:16.334 May have multiple subsystem ports: No 00:30:16.334 May have multiple controllers: Yes 00:30:16.334 Associated with SR-IOV VF: No 00:30:16.334 Max Data Transfer Size: 524288 00:30:16.334 Max Number of Namespaces: 
256 00:30:16.334 Max Number of I/O Queues: 64 00:30:16.334 NVMe Specification Version (VS): 1.4 00:30:16.334 NVMe Specification Version (Identify): 1.4 00:30:16.334 Maximum Queue Entries: 2048 00:30:16.334 Contiguous Queues Required: Yes 00:30:16.334 Arbitration Mechanisms Supported 00:30:16.334 Weighted Round Robin: Not Supported 00:30:16.334 Vendor Specific: Not Supported 00:30:16.334 Reset Timeout: 7500 ms 00:30:16.334 Doorbell Stride: 4 bytes 00:30:16.334 NVM Subsystem Reset: Not Supported 00:30:16.334 Command Sets Supported 00:30:16.334 NVM Command Set: Supported 00:30:16.334 Boot Partition: Not Supported 00:30:16.334 Memory Page Size Minimum: 4096 bytes 00:30:16.334 Memory Page Size Maximum: 65536 bytes 00:30:16.334 Persistent Memory Region: Not Supported 00:30:16.334 Optional Asynchronous Events Supported 00:30:16.334 Namespace Attribute Notices: Supported 00:30:16.334 Firmware Activation Notices: Not Supported 00:30:16.334 ANA Change Notices: Not Supported 00:30:16.334 PLE Aggregate Log Change Notices: Not Supported 00:30:16.334 LBA Status Info Alert Notices: Not Supported 00:30:16.334 EGE Aggregate Log Change Notices: Not Supported 00:30:16.334 Normal NVM Subsystem Shutdown event: Not Supported 00:30:16.334 Zone Descriptor Change Notices: Not Supported 00:30:16.334 Discovery Log Change Notices: Not Supported 00:30:16.334 Controller Attributes 00:30:16.334 128-bit Host Identifier: Not Supported 00:30:16.334 Non-Operational Permissive Mode: Not Supported 00:30:16.334 NVM Sets: Not Supported 00:30:16.334 Read Recovery Levels: Not Supported 00:30:16.334 Endurance Groups: Supported 00:30:16.334 Predictable Latency Mode: Not Supported 00:30:16.334 Traffic Based Keep ALive: Not Supported 00:30:16.334 Namespace Granularity: Not Supported 00:30:16.334 SQ Associations: Not Supported 00:30:16.334 UUID List: Not Supported 00:30:16.334 Multi-Domain Subsystem: Not Supported 00:30:16.334 Fixed Capacity Management: Not Supported 00:30:16.334 Variable Capacity Management: Not Supported 00:30:16.334 Delete Endurance Group: Not Supported 00:30:16.334 Delete NVM Set: Not Supported 00:30:16.334 Extended LBA Formats Supported: Supported 00:30:16.334 Flexible Data Placement Supported: Supported 00:30:16.334 00:30:16.334 Controller Memory Buffer Support 00:30:16.334 ================================ 00:30:16.334 Supported: No 00:30:16.334 00:30:16.334 Persistent Memory Region Support 00:30:16.334 ================================ 00:30:16.334 Supported: No 00:30:16.334 00:30:16.334 Admin Command Set Attributes 00:30:16.334 ============================ 00:30:16.334 Security Send/Receive: Not Supported 00:30:16.334 Format NVM: Supported 00:30:16.334 Firmware Activate/Download: Not Supported 00:30:16.334 Namespace Management: Supported 00:30:16.334 Device Self-Test: Not Supported 00:30:16.334 Directives: Supported 00:30:16.334 NVMe-MI: Not Supported 00:30:16.334 Virtualization Management: Not Supported 00:30:16.334 Doorbell Buffer Config: Supported 00:30:16.334 Get LBA Status Capability: Not Supported 00:30:16.334 Command & Feature Lockdown Capability: Not Supported 00:30:16.334 Abort Command Limit: 4 00:30:16.334 Async Event Request Limit: 4 00:30:16.334 Number of Firmware Slots: N/A 00:30:16.334 Firmware Slot 1 Read-Only: N/A 00:30:16.334 Firmware Activation Without Reset: N/A 00:30:16.334 Multiple Update Detection Support: N/A 00:30:16.334 Firmware Update Granularity: No Information Provided 00:30:16.334 Per-Namespace SMART Log: Yes 00:30:16.334 Asymmetric Namespace Access Log Page: Not Supported 
00:30:16.334 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:16.334 Command Effects Log Page: Supported 00:30:16.334 Get Log Page Extended Data: Supported 00:30:16.334 Telemetry Log Pages: Not Supported 00:30:16.334 Persistent Event Log Pages: Not Supported 00:30:16.334 Supported Log Pages Log Page: May Support 00:30:16.334 Commands Supported & Effects Log Page: Not Supported 00:30:16.334 Feature Identifiers & Effects Log Page:May Support 00:30:16.334 NVMe-MI Commands & Effects Log Page: May Support 00:30:16.334 Data Area 4 for Telemetry Log: Not Supported 00:30:16.334 Error Log Page Entries Supported: 1 00:30:16.334 Keep Alive: Not Supported 00:30:16.334 00:30:16.334 NVM Command Set Attributes 00:30:16.334 ========================== 00:30:16.334 Submission Queue Entry Size 00:30:16.334 Max: 64 00:30:16.334 Min: 64 00:30:16.334 Completion Queue Entry Size 00:30:16.334 Max: 16 00:30:16.334 Min: 16 00:30:16.334 Number of Namespaces: 256 00:30:16.334 Compare Command: Supported 00:30:16.334 Write Uncorrectable Command: Not Supported 00:30:16.334 Dataset Management Command: Supported 00:30:16.334 Write Zeroes Command: Supported 00:30:16.334 Set Features Save Field: Supported 00:30:16.334 Reservations: Not Supported 00:30:16.334 Timestamp: Supported 00:30:16.334 Copy: Supported 00:30:16.334 Volatile Write Cache: Present 00:30:16.334 Atomic Write Unit (Normal): 1 00:30:16.334 Atomic Write Unit (PFail): 1 00:30:16.334 Atomic Compare & Write Unit: 1 00:30:16.334 Fused Compare & Write: Not Supported 00:30:16.334 Scatter-Gather List 00:30:16.334 SGL Command Set: Supported 00:30:16.334 SGL Keyed: Not Supported 00:30:16.334 SGL Bit Bucket Descriptor: Not Supported 00:30:16.334 SGL Metadata Pointer: Not Supported 00:30:16.334 Oversized SGL: Not Supported 00:30:16.334 SGL Metadata Address: Not Supported 00:30:16.334 SGL Offset: Not Supported 00:30:16.334 Transport SGL Data Block: Not Supported 00:30:16.334 Replay Protected Memory Block: Not Supported 00:30:16.334 00:30:16.334 Firmware Slot Information 00:30:16.334 ========================= 00:30:16.334 Active slot: 1 00:30:16.334 Slot 1 Firmware Revision: 1.0 00:30:16.334 00:30:16.334 00:30:16.334 Commands Supported and Effects 00:30:16.334 ============================== 00:30:16.334 Admin Commands 00:30:16.334 -------------- 00:30:16.334 Delete I/O Submission Queue (00h): Supported 00:30:16.334 Create I/O Submission Queue (01h): Supported 00:30:16.334 Get Log Page (02h): Supported 00:30:16.334 Delete I/O Completion Queue (04h): Supported 00:30:16.334 Create I/O Completion Queue (05h): Supported 00:30:16.334 Identify (06h): Supported 00:30:16.334 Abort (08h): Supported 00:30:16.334 Set Features (09h): Supported 00:30:16.334 Get Features (0Ah): Supported 00:30:16.334 Asynchronous Event Request (0Ch): Supported 00:30:16.334 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:16.334 Directive Send (19h): Supported 00:30:16.334 Directive Receive (1Ah): Supported 00:30:16.334 Virtualization Management (1Ch): Supported 00:30:16.334 Doorbell Buffer Config (7Ch): Supported 00:30:16.334 Format NVM (80h): Supported LBA-Change 00:30:16.335 I/O Commands 00:30:16.335 ------------ 00:30:16.335 Flush (00h): Supported LBA-Change 00:30:16.335 Write (01h): Supported LBA-Change 00:30:16.335 Read (02h): Supported 00:30:16.335 Compare (05h): Supported 00:30:16.335 Write Zeroes (08h): Supported LBA-Change 00:30:16.335 Dataset Management (09h): Supported LBA-Change 00:30:16.335 Unknown (0Ch): Supported 00:30:16.335 Unknown (12h): Supported 00:30:16.335 Copy 
(19h): Supported LBA-Change 00:30:16.335 Unknown (1Dh): Supported LBA-Change 00:30:16.335 00:30:16.335 Error Log 00:30:16.335 ========= 00:30:16.335 00:30:16.335 Arbitration 00:30:16.335 =========== 00:30:16.335 Arbitration Burst: no limit 00:30:16.335 00:30:16.335 Power Management 00:30:16.335 ================ 00:30:16.335 Number of Power States: 1 00:30:16.335 Current Power State: Power State #0 00:30:16.335 Power State #0: 00:30:16.335 Max Power: 25.00 W 00:30:16.335 Non-Operational State: Operational 00:30:16.335 Entry Latency: 16 microseconds 00:30:16.335 Exit Latency: 4 microseconds 00:30:16.335 Relative Read Throughput: 0 00:30:16.335 Relative Read Latency: 0 00:30:16.335 Relative Write Throughput: 0 00:30:16.335 Relative Write Latency: 0 00:30:16.335 Idle Power: Not Reported 00:30:16.335 Active Power: Not Reported 00:30:16.335 Non-Operational Permissive Mode: Not Supported 00:30:16.335 00:30:16.335 Health Information 00:30:16.335 ================== 00:30:16.335 Critical Warnings: 00:30:16.335 Available Spare Space: OK 00:30:16.335 Temperature: OK 00:30:16.335 Device Reliability: OK 00:30:16.335 Read Only: No 00:30:16.335 Volatile Memory Backup: OK 00:30:16.335 Current Temperature: 323 Kelvin (50 Celsius) 00:30:16.335 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:16.335 Available Spare: 0% 00:30:16.335 Available Spare Threshold: 0% 00:30:16.335 Life Percentage Used: 0% 00:30:16.335 Data Units Read: 774 00:30:16.335 Data Units Written: 703 00:30:16.335 Host Read Commands: 33425 00:30:16.335 Host Write Commands: 32848 00:30:16.335 Controller Busy Time: 0 minutes 00:30:16.335 Power Cycles: 0 00:30:16.335 Power On Hours: 0 hours 00:30:16.335 Unsafe Shutdowns: 0 00:30:16.335 Unrecoverable Media Errors: 0 00:30:16.335 Lifetime Error Log Entries: 0 00:30:16.335 Warning Temperature Time: 0 minutes 00:30:16.335 Critical Temperature Time: 0 minutes 00:30:16.335 00:30:16.335 Number of Queues 00:30:16.335 ================ 00:30:16.335 Number of I/O Submission Queues: 64 00:30:16.335 Number of I/O Completion Queues: 64 00:30:16.335 00:30:16.335 ZNS Specific Controller Data 00:30:16.335 ============================ 00:30:16.335 Zone Append Size Limit: 0 00:30:16.335 00:30:16.335 00:30:16.335 Active Namespaces 00:30:16.335 ================= 00:30:16.335 Namespace ID:1 00:30:16.335 Error Recovery Timeout: Unlimited 00:30:16.335 Command Set Identifier: NVM (00h) 00:30:16.335 Deallocate: Supported 00:30:16.335 Deallocated/Unwritten Error: Supported 00:30:16.335 Deallocated Read Value: All 0x00 00:30:16.335 Deallocate in Write Zeroes: Not Supported 00:30:16.335 Deallocated Guard Field: 0xFFFF 00:30:16.335 Flush: Supported 00:30:16.335 Reservation: Not Supported 00:30:16.335 Namespace Sharing Capabilities: Multiple Controllers 00:30:16.335 Size (in LBAs): 262144 (1GiB) 00:30:16.335 Capacity (in LBAs): 262144 (1GiB) 00:30:16.335 Utilization (in LBAs): 262144 (1GiB) 00:30:16.335 Thin Provisioning: Not Supported 00:30:16.335 Per-NS Atomic Units: No 00:30:16.335 Maximum Single Source Range Length: 128 00:30:16.335 Maximum Copy Length: 128 00:30:16.335 Maximum Source Range Count: 128 00:30:16.335 NGUID/EUI64 Never Reused: No 00:30:16.335 Namespace Write Protected: No 00:30:16.335 Endurance group ID: 1 00:30:16.335 Number of LBA Formats: 8 00:30:16.335 Current LBA Format: LBA Format #04 00:30:16.335 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:16.335 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:16.335 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:16.335 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:30:16.335 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:16.335 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:16.335 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:16.335 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:16.335 00:30:16.335 Get Feature FDP: 00:30:16.335 ================ 00:30:16.335 Enabled: Yes 00:30:16.335 FDP configuration index: 0 00:30:16.335 00:30:16.335 FDP configurations log page 00:30:16.335 =========================== 00:30:16.335 Number of FDP configurations: 1 00:30:16.335 Version: 0 00:30:16.335 Size: 112 00:30:16.335 FDP Configuration Descriptor: 0 00:30:16.335 Descriptor Size: 96 00:30:16.335 Reclaim Group Identifier format: 2 00:30:16.335 FDP Volatile Write Cache: Not Present 00:30:16.335 FDP Configuration: Valid 00:30:16.335 Vendor Specific Size: 0 00:30:16.335 Number of Reclaim Groups: 2 00:30:16.335 Number of Reclaim Unit Handles: 8 00:30:16.335 Max Placement Identifiers: 128 00:30:16.335 Number of Namespaces Supported: 256 00:30:16.335 Reclaim Unit Nominal Size: 6000000 bytes 00:30:16.335 Estimated Reclaim Unit Time Limit: Not Reported 00:30:16.335 RUH Desc #000: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #001: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #002: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #003: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #004: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #005: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #006: RUH Type: Initially Isolated 00:30:16.335 RUH Desc #007: RUH Type: Initially Isolated 00:30:16.335 00:30:16.335 FDP reclaim unit handle usage log page 00:30:16.335 ====================================== 00:30:16.335 Number of Reclaim Unit Handles: 8 00:30:16.335 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:16.335 RUH Usage Desc #001: RUH Attributes: Unused 00:30:16.335 RUH Usage Desc #002: RUH Attributes: Unused 00:30:16.335 RUH Usage Desc #003: RUH Attributes: Unused 00:30:16.335 RUH Usage Desc #004: RUH Attributes: Unused 00:30:16.335 RUH Usage Desc #005: RUH Attributes: Unused 00:30:16.335 RUH Usage Desc #006: RUH Attributes: Unused 00:30:16.335 RUH Usage Desc #007: RUH Attributes: Unused 00:30:16.335 00:30:16.335 FDP statistics log page 00:30:16.335 ======================= 00:30:16.335 Host bytes with metadata written: 441294848 00:30:16.335 Media bytes with metadata written: 441360384 00:30:16.335 Media bytes erased: 0 00:30:16.335 00:30:16.335 FDP events log page 00:30:16.335 =================== 00:30:16.335 Number of FDP events: 0 00:30:16.335 00:30:16.335 NVM Specific Namespace Data 00:30:16.335 =========================== 00:30:16.335 Logical Block Storage Tag Mask: 0 00:30:16.335 Protection Information Capabilities: 00:30:16.335 16b Guard Protection Information Storage Tag Support: No 00:30:16.335 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:16.335 Storage Tag Check Read Support: No 00:30:16.335 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:16.335 ************************************ 00:30:16.335 END TEST nvme_identify 00:30:16.335 ************************************ 00:30:16.335 00:30:16.335 real 0m1.721s 00:30:16.335 user 0m0.643s 00:30:16.335 sys 0m0.867s 00:30:16.335 01:59:25 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:16.335 01:59:25 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:30:16.335 01:59:25 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:16.335 01:59:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:16.335 01:59:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:16.335 01:59:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:16.335 ************************************ 00:30:16.335 START TEST nvme_perf 00:30:16.335 ************************************ 00:30:16.335 01:59:25 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:30:16.335 01:59:25 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:17.731 Initializing NVMe Controllers 00:30:17.731 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:17.731 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:17.731 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:17.731 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:17.731 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:17.731 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:17.731 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:17.731 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:17.731 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:17.731 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:30:17.731 Initialization complete. Launching workers. 
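Before the table below: the spdk_nvme_perf flags in the invocation above map, as far as the perf usage text goes, to the settings annotated in this sketch. The flag readings are assumptions and should be checked against spdk_nvme_perf --help, and the BDF passed to -r is a placeholder. The awk line at the end cross-checks the MiB/s column of the summary that follows against its IOPS column.

    #!/usr/bin/env bash
    # Sketch of an equivalent standalone run (flag readings are assumptions,
    # not authoritative documentation).
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    args=(
        -q 128    # outstanding I/O per namespace (queue depth)
        -w read   # sequential-read workload
        -o 12288  # I/O size in bytes (12 KiB)
        -t 1      # run time in seconds
        -LL       # latency tracking; the doubled flag adds the per-bucket histogram
        -i 0      # shared-memory group ID
        -N        # skip the controller shutdown notification on detach
        -r 'trtype:PCIe traddr:0000:00:10.0'   # optional: restrict to one controller (placeholder BDF)
    )
    "$PERF" "${args[@]}"

    # Cross-check: MiB/s = IOPS * I/O size / 2^20, so 12348.83 IOPS at 12288 B
    # per I/O should print 144.71, the MiB/s reported for the 0000:00:10.0 row below.
    awk 'BEGIN { printf "%.2f\n", 12348.83 * 12288 / 1048576 }'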
00:30:17.731 ========================================================
00:30:17.731                                                                              Latency(us)
00:30:17.731 Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:17.731 PCIE (0000:00:10.0) NSID 1 from core 0:   12348.83     144.71   10381.36    7866.98   39850.00
00:30:17.731 PCIE (0000:00:11.0) NSID 1 from core 0:   12348.83     144.71   10360.79    7948.28   37390.95
00:30:17.731 PCIE (0000:00:13.0) NSID 1 from core 0:   12348.83     144.71   10337.84    7993.75   35521.31
00:30:17.731 PCIE (0000:00:12.0) NSID 1 from core 0:   12348.83     144.71   10314.74    7991.16   33069.30
00:30:17.731 PCIE (0000:00:12.0) NSID 2 from core 0:   12348.83     144.71   10291.39    8002.62   30666.47
00:30:17.731 PCIE (0000:00:12.0) NSID 3 from core 0:   12412.81     145.46   10214.59    8003.38   23553.82
00:30:17.731 ========================================================
00:30:17.731 Total                                  :   74156.94     869.03   10316.70    7866.98   39850.00
00:30:17.731
00:30:17.731 Summary latency data (us), all six namespaces from core 0
00:30:17.731 (columns: PCIE 0000:00:10.0 NSID 1, 0000:00:11.0 NSID 1, 0000:00:13.0 NSID 1, 0000:00:12.0 NSID 1/2/3):
00:30:17.731 =================================================================================
00:30:17.731 Percentile  :    10.0 n1     11.0 n1     13.0 n1     12.0 n1     12.0 n2     12.0 n3
00:30:17.731   1.00000% :   8340.945    8400.524    8400.524    8400.524    8400.524    8400.524
00:30:17.731  10.00000% :   9234.618    9294.196    9294.196    9294.196    9294.196    9294.196
00:30:17.731  25.00000% :   9532.509    9592.087    9592.087    9592.087    9592.087    9592.087
00:30:17.731  50.00000% :  10009.135    9949.556    9949.556   10009.135   10009.135   10009.135
00:30:17.731  75.00000% :  10485.760   10426.182   10426.182   10426.182   10426.182   10426.182
00:30:17.731  90.00000% :  11617.745   11617.745   11677.324   11677.324   11677.324   11558.167
00:30:17.731  95.00000% :  12392.262   12332.684   12392.262   12332.684   12332.684   12332.684
00:30:17.731  98.00000% :  13822.138   13583.825   13702.982   13762.560   13822.138   13822.138
00:30:17.731  99.00000% :  31695.593   29431.622   27525.120   25022.836   22520.553   15490.327
00:30:17.731  99.50000% :  37891.724   35508.596   33602.095   31218.967   28716.684   21567.302
00:30:17.731  99.90000% :  39559.913   37176.785   35270.284   32887.156   30384.873   23235.491
00:30:17.731  99.99000% :  39798.225   37415.098   35508.596   33125.469   30742.342   23592.960
00:30:17.731  99.99900% :  40036.538   37415.098   35746.909   33125.469   30742.342   23592.960
00:30:17.731  99.99990% :  40036.538   37415.098   35746.909   33125.469   30742.342   23592.960
00:30:17.731  99.99999% :  40036.538   37415.098   35746.909   33125.469   30742.342   23592.960
00:30:17.731
00:30:17.731 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:17.731 ==============================================================================
00:30:17.731        Range in us     Cumulative    IO count
00:30:17.732 [ per-bucket detail: 7864.320us - 40036.538us, cumulative 0.0567% - 100.0000% ]
00:30:17.732
00:30:17.732 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:30:17.732 ==============================================================================
00:30:17.732        Range in us     Cumulative    IO count
00:30:17.733 [ per-bucket detail: 7923.898us - 37415.098us, cumulative 0.0081% - 100.0000% ]
00:30:17.733
00:30:17.733 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:30:17.733 ==============================================================================
00:30:17.733        Range in us     Cumulative    IO count
00:30:17.733 [ per-bucket detail: 7983.476us - 35746.909us, cumulative 0.0486% - 100.0000% ]
00:30:17.733
00:30:17.733 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:30:17.733 ==============================================================================
00:30:17.733        Range in us     Cumulative    IO count
00:30:17.734 [ per-bucket detail: 7983.476us - 33125.469us, cumulative 0.0405% - 100.0000% ]
00:30:17.734
00:30:17.734 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:30:17.734 ==============================================================================
00:30:17.734        Range in us     Cumulative    IO count
00:30:17.735 [ per-bucket detail: 7983.476us - 30742.342us, cumulative 0.0567% - 100.0000% ]
00:30:17.735
00:30:17.735 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:30:17.735 ==============================================================================
00:30:17.735        Range in us     Cumulative    IO count
00:30:17.736 [ per-bucket detail: 7983.476us - 23592.960us, cumulative 0.0322% - 100.0000% ]
00:30:17.736
00:30:17.736 01:59:26 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:30:19.108 Initializing NVMe Controllers
00:30:19.108 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:30:19.108 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:30:19.108 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:30:19.108 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:30:19.108 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:30:19.108 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:30:19.108 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:30:19.108 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:30:19.108 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:30:19.108 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:30:19.108 Initialization complete. Launching workers.
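Note on reading this output: the spdk_nvme_perf invocation above runs 12288-byte writes (-w write -o 12288) at queue depth 128 (-q 128) for one second (-t 1) with shared-memory id 0 (-i 0); -LL appears to enable the detailed per-bucket latency output, of which the summary percentiles are a digest. The columns can be cross-checked against each other: MiB/s is IOPS times the 12288-byte IO size, and each N% summary row is the upper edge of the first histogram bucket whose cumulative share reaches N. A minimal sketch of both checks follows; the helper names are hypothetical and the values are transcribed from the tables above, so this is an illustration, not SPDK code.

    # Sketch: how the perf summary numbers relate to the raw counts in this log.
    IO_SIZE = 12288            # -o 12288: one 12 KiB write per IO
    MIB = 1024 * 1024

    def mib_per_sec(iops: float) -> float:
        # Throughput column: IOPS times IO size, converted to MiB/s.
        return iops * IO_SIZE / MIB

    # (bucket end in us, cumulative percent) pairs, a few rows from the
    # first device's run-1 histogram above.
    BUCKETS = [(8281.367, 0.9310), (8340.945, 1.1091),
               (9949.556, 49.1985), (10009.135, 52.5178)]

    def percentile_us(buckets, pct: float) -> float:
        # N% row: upper edge of the first bucket whose cumulative share >= N.
        for end_us, cum in buckets:
            if cum >= pct:
                return end_us
        raise ValueError("histogram never reaches %.5f%%" % pct)

    assert round(mib_per_sec(12348.83), 2) == 144.71   # summary table, row 1
    assert percentile_us(BUCKETS, 1.0) == 8340.945     # matches 1.00000% row
    assert percentile_us(BUCKETS, 50.0) == 10009.135   # matches 50.00000% row

The same reading explains why PCIE (0000:00:12.0) NSID 3 stands out in both runs: its cumulative counts hit 100% by about 23.6ms, so its 99%+ percentiles and max sit far below the other namespaces, which carry a tail of IOs out to 30-41ms.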
00:30:19.108 ========================================================
00:30:19.108                                                                              Latency(us)
00:30:19.108 Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:19.108 PCIE (0000:00:10.0) NSID 1 from core 0:   10878.90     127.49   11795.00    9657.93   40851.38
00:30:19.108 PCIE (0000:00:11.0) NSID 1 from core 0:   10878.90     127.49   11770.94    9678.35   38151.83
00:30:19.108 PCIE (0000:00:13.0) NSID 1 from core 0:   10878.90     127.49   11746.51    9631.15   36309.08
00:30:19.108 PCIE (0000:00:12.0) NSID 1 from core 0:   10878.90     127.49   11721.79    9765.60   33792.65
00:30:19.108 PCIE (0000:00:12.0) NSID 2 from core 0:   10878.90     127.49   11697.81    9695.89   31369.05
00:30:19.108 PCIE (0000:00:12.0) NSID 3 from core 0:   10942.89     128.24   11605.86    9766.64   23521.58
00:30:19.108 ========================================================
00:30:19.108 Total                                  :   65337.40     765.67   11722.87    9631.15   40851.38
00:30:19.108
00:30:19.108 Summary latency data (us), all six namespaces from core 0 (columns as in the first run):
00:30:19.108 =================================================================================
00:30:19.108 Percentile  :    10.0 n1     11.0 n1     13.0 n1     12.0 n1     12.0 n2     12.0 n3
00:30:19.108   1.00000% :   9889.978   10128.291   10068.713   10068.713   10068.713   10128.291
00:30:19.108  10.00000% :  10366.604   10485.760   10485.760   10545.338   10545.338   10545.338
00:30:19.108  25.00000% :  10724.073   10783.651   10783.651   10783.651   10783.651   10783.651
00:30:19.108  50.00000% :  11260.276   11260.276   11260.276   11260.276   11260.276   11260.276
00:30:19.108  75.00000% :  12034.793   11915.636   11975.215   11915.636   11975.215   11915.636
00:30:19.108  90.00000% :  13464.669   13524.247   13464.669   13583.825   13643.404   13583.825
00:30:19.108  95.00000% :  14298.764   14298.764   14239.185   14239.185   14239.185   14358.342
00:30:19.108  98.00000% :  15073.280   14954.124   14894.545   14954.124   14894.545   14894.545
00:30:19.108  99.00000% :  31695.593   29312.465   27525.120   25380.305   22997.178   15490.327
00:30:19.108  99.50000% :  38844.975   36461.847   34555.345   32172.218   29550.778   21805.615
00:30:19.108  99.90000% :  40513.164   37891.724   36223.535   33602.095   31218.967   23235.491
00:30:19.108  99.99000% :  40989.789   38130.036   36461.847   33840.407   31457.280   23592.960
00:30:19.108  99.99900% :  40989.789   38368.349   36461.847   33840.407   31457.280   23592.960
00:30:19.108  99.99990% :  40989.789   38368.349   36461.847   33840.407   31457.280   23592.960
00:30:19.108  99.99999% :  40989.789   38368.349   36461.847   33840.407   31457.280   23592.960
00:30:19.108
00:30:19.109 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:19.109 ==============================================================================
00:30:19.109        Range in us     Cumulative    IO count
00:30:19.109 [ per-bucket detail: 9651.665us - 40989.789us, cumulative 0.0276% - 100.0000% ]
00:30:19.110
00:30:19.110 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:30:19.110 ==============================================================================
00:30:19.110        Range in us     Cumulative    IO count
00:30:19.110 [ per-bucket detail: 9651.665us - 36223.535us, cumulative 0.0092% - 99.4577% ]
00:30:19.110 36223.535 - 36461.847: 99.5312% ( 8) 00:30:19.110 36461.847 - 36700.160: 99.5956% ( 7) 00:30:19.110 36700.160 - 36938.473: 99.6599% ( 7) 00:30:19.110 36938.473 - 37176.785: 99.7243% ( 7) 00:30:19.110 37176.785 - 37415.098: 99.7886% ( 7) 00:30:19.110 37415.098 - 37653.411: 99.8529% ( 7) 00:30:19.110 37653.411 - 37891.724: 99.9265% ( 8) 00:30:19.110 37891.724 - 38130.036: 99.9908% ( 7) 00:30:19.110 38130.036 - 38368.349: 100.0000% ( 1) 00:30:19.110 00:30:19.110 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:19.110 ============================================================================== 00:30:19.110 Range in us Cumulative IO count 00:30:19.110 9592.087 - 9651.665: 0.0092% ( 1) 00:30:19.110 9711.244 - 9770.822: 0.0276% ( 2) 00:30:19.110 9770.822 - 9830.400: 0.0643% ( 4) 00:30:19.110 9830.400 - 9889.978: 0.1287% ( 7) 00:30:19.110 9889.978 - 9949.556: 0.4044% ( 30) 00:30:19.110 9949.556 - 10009.135: 0.7353% ( 36) 00:30:19.110 10009.135 - 10068.713: 1.1949% ( 50) 00:30:19.110 10068.713 - 10128.291: 1.8934% ( 76) 00:30:19.110 10128.291 - 10187.869: 2.8768% ( 107) 00:30:19.110 10187.869 - 10247.447: 4.2096% ( 145) 00:30:19.110 10247.447 - 10307.025: 5.6342% ( 155) 00:30:19.110 10307.025 - 10366.604: 6.9118% ( 139) 00:30:19.110 10366.604 - 10426.182: 8.6029% ( 184) 00:30:19.110 10426.182 - 10485.760: 10.7996% ( 239) 00:30:19.110 10485.760 - 10545.338: 13.3456% ( 277) 00:30:19.110 10545.338 - 10604.916: 16.0662% ( 296) 00:30:19.110 10604.916 - 10664.495: 19.6599% ( 391) 00:30:19.110 10664.495 - 10724.073: 23.0790% ( 372) 00:30:19.110 10724.073 - 10783.651: 26.9393% ( 420) 00:30:19.110 10783.651 - 10843.229: 30.1562% ( 350) 00:30:19.110 10843.229 - 10902.807: 33.8695% ( 404) 00:30:19.110 10902.807 - 10962.385: 36.8290% ( 322) 00:30:19.110 10962.385 - 11021.964: 39.8713% ( 331) 00:30:19.110 11021.964 - 11081.542: 42.6287% ( 300) 00:30:19.110 11081.542 - 11141.120: 45.2757% ( 288) 00:30:19.110 11141.120 - 11200.698: 48.1158% ( 309) 00:30:19.110 11200.698 - 11260.276: 51.5533% ( 374) 00:30:19.110 11260.276 - 11319.855: 54.2923% ( 298) 00:30:19.110 11319.855 - 11379.433: 56.7923% ( 272) 00:30:19.110 11379.433 - 11439.011: 59.6232% ( 308) 00:30:19.110 11439.011 - 11498.589: 62.2426% ( 285) 00:30:19.110 11498.589 - 11558.167: 64.7335% ( 271) 00:30:19.110 11558.167 - 11617.745: 66.5165% ( 194) 00:30:19.110 11617.745 - 11677.324: 68.2537% ( 189) 00:30:19.110 11677.324 - 11736.902: 70.3309% ( 226) 00:30:19.110 11736.902 - 11796.480: 71.9945% ( 181) 00:30:19.110 11796.480 - 11856.058: 73.5110% ( 165) 00:30:19.110 11856.058 - 11915.636: 74.7794% ( 138) 00:30:19.110 11915.636 - 11975.215: 76.0938% ( 143) 00:30:19.110 11975.215 - 12034.793: 77.0496% ( 104) 00:30:19.110 12034.793 - 12094.371: 78.0239% ( 106) 00:30:19.110 12094.371 - 12153.949: 78.8419% ( 89) 00:30:19.110 12153.949 - 12213.527: 79.8805% ( 113) 00:30:19.110 12213.527 - 12273.105: 80.7537% ( 95) 00:30:19.110 12273.105 - 12332.684: 81.7096% ( 104) 00:30:19.110 12332.684 - 12392.262: 82.5092% ( 87) 00:30:19.110 12392.262 - 12451.840: 83.0239% ( 56) 00:30:19.110 12451.840 - 12511.418: 83.5478% ( 57) 00:30:19.110 12511.418 - 12570.996: 84.2004% ( 71) 00:30:19.110 12570.996 - 12630.575: 84.8346% ( 69) 00:30:19.110 12630.575 - 12690.153: 85.3493% ( 56) 00:30:19.110 12690.153 - 12749.731: 85.6801% ( 36) 00:30:19.110 12749.731 - 12809.309: 86.1305% ( 49) 00:30:19.110 12809.309 - 12868.887: 86.6912% ( 61) 00:30:19.110 12868.887 - 12928.465: 87.0588% ( 40) 00:30:19.110 12928.465 - 12988.044: 87.4724% ( 45) 00:30:19.110 
12988.044 - 13047.622: 87.9504% ( 52) 00:30:19.110 13047.622 - 13107.200: 88.2812% ( 36) 00:30:19.110 13107.200 - 13166.778: 88.6121% ( 36) 00:30:19.110 13166.778 - 13226.356: 89.0349% ( 46) 00:30:19.110 13226.356 - 13285.935: 89.2923% ( 28) 00:30:19.110 13285.935 - 13345.513: 89.5404% ( 27) 00:30:19.110 13345.513 - 13405.091: 89.8621% ( 35) 00:30:19.110 13405.091 - 13464.669: 90.1746% ( 34) 00:30:19.110 13464.669 - 13524.247: 90.4320% ( 28) 00:30:19.110 13524.247 - 13583.825: 90.7261% ( 32) 00:30:19.110 13583.825 - 13643.404: 91.0662% ( 37) 00:30:19.110 13643.404 - 13702.982: 91.3971% ( 36) 00:30:19.110 13702.982 - 13762.560: 91.8566% ( 50) 00:30:19.110 13762.560 - 13822.138: 92.2059% ( 38) 00:30:19.110 13822.138 - 13881.716: 92.7022% ( 54) 00:30:19.110 13881.716 - 13941.295: 93.1342% ( 47) 00:30:19.110 13941.295 - 14000.873: 93.5386% ( 44) 00:30:19.110 14000.873 - 14060.451: 93.9522% ( 45) 00:30:19.110 14060.451 - 14120.029: 94.3290% ( 41) 00:30:19.110 14120.029 - 14179.607: 94.7243% ( 43) 00:30:19.110 14179.607 - 14239.185: 95.1011% ( 41) 00:30:19.110 14239.185 - 14298.764: 95.4779% ( 41) 00:30:19.110 14298.764 - 14358.342: 95.8272% ( 38) 00:30:19.110 14358.342 - 14417.920: 96.1489% ( 35) 00:30:19.110 14417.920 - 14477.498: 96.4522% ( 33) 00:30:19.110 14477.498 - 14537.076: 96.7555% ( 33) 00:30:19.110 14537.076 - 14596.655: 96.9945% ( 26) 00:30:19.110 14596.655 - 14656.233: 97.2702% ( 30) 00:30:19.110 14656.233 - 14715.811: 97.4724% ( 22) 00:30:19.110 14715.811 - 14775.389: 97.6654% ( 21) 00:30:19.110 14775.389 - 14834.967: 97.8401% ( 19) 00:30:19.110 14834.967 - 14894.545: 98.0147% ( 19) 00:30:19.110 14894.545 - 14954.124: 98.1434% ( 14) 00:30:19.110 14954.124 - 15013.702: 98.2904% ( 16) 00:30:19.110 15013.702 - 15073.280: 98.4007% ( 12) 00:30:19.110 15073.280 - 15132.858: 98.4835% ( 9) 00:30:19.110 15132.858 - 15192.436: 98.5386% ( 6) 00:30:19.110 15192.436 - 15252.015: 98.5938% ( 6) 00:30:19.110 15252.015 - 15371.171: 98.6949% ( 11) 00:30:19.110 15371.171 - 15490.327: 98.7224% ( 3) 00:30:19.110 15490.327 - 15609.484: 98.7500% ( 3) 00:30:19.110 15609.484 - 15728.640: 98.7776% ( 3) 00:30:19.110 15728.640 - 15847.796: 98.8143% ( 4) 00:30:19.110 15847.796 - 15966.953: 98.8235% ( 1) 00:30:19.110 26691.025 - 26810.182: 98.8419% ( 2) 00:30:19.111 26810.182 - 26929.338: 98.8695% ( 3) 00:30:19.111 26929.338 - 27048.495: 98.8971% ( 3) 00:30:19.111 27048.495 - 27167.651: 98.9246% ( 3) 00:30:19.111 27167.651 - 27286.807: 98.9614% ( 4) 00:30:19.111 27286.807 - 27405.964: 98.9982% ( 4) 00:30:19.111 27405.964 - 27525.120: 99.0257% ( 3) 00:30:19.111 27525.120 - 27644.276: 99.0533% ( 3) 00:30:19.111 27644.276 - 27763.433: 99.0809% ( 3) 00:30:19.111 27763.433 - 27882.589: 99.1085% ( 3) 00:30:19.111 27882.589 - 28001.745: 99.1360% ( 3) 00:30:19.111 28001.745 - 28120.902: 99.1636% ( 3) 00:30:19.111 28120.902 - 28240.058: 99.1912% ( 3) 00:30:19.111 28240.058 - 28359.215: 99.2279% ( 4) 00:30:19.111 28359.215 - 28478.371: 99.2555% ( 3) 00:30:19.111 28478.371 - 28597.527: 99.2923% ( 4) 00:30:19.111 28597.527 - 28716.684: 99.3290% ( 4) 00:30:19.111 28716.684 - 28835.840: 99.3566% ( 3) 00:30:19.111 28835.840 - 28954.996: 99.3934% ( 4) 00:30:19.111 28954.996 - 29074.153: 99.4118% ( 2) 00:30:19.111 34078.720 - 34317.033: 99.4577% ( 5) 00:30:19.111 34317.033 - 34555.345: 99.5221% ( 7) 00:30:19.111 34555.345 - 34793.658: 99.5772% ( 6) 00:30:19.111 34793.658 - 35031.971: 99.6415% ( 7) 00:30:19.111 35031.971 - 35270.284: 99.7059% ( 7) 00:30:19.111 35270.284 - 35508.596: 99.7702% ( 7) 00:30:19.111 35508.596 - 
35746.909: 99.8346% ( 7) 00:30:19.111 35746.909 - 35985.222: 99.8989% ( 7) 00:30:19.111 35985.222 - 36223.535: 99.9724% ( 8) 00:30:19.111 36223.535 - 36461.847: 100.0000% ( 3) 00:30:19.111 00:30:19.111 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:19.111 ============================================================================== 00:30:19.111 Range in us Cumulative IO count 00:30:19.111 9711.244 - 9770.822: 0.0092% ( 1) 00:30:19.111 9770.822 - 9830.400: 0.0368% ( 3) 00:30:19.111 9830.400 - 9889.978: 0.1654% ( 14) 00:30:19.111 9889.978 - 9949.556: 0.3217% ( 17) 00:30:19.111 9949.556 - 10009.135: 0.6618% ( 37) 00:30:19.111 10009.135 - 10068.713: 1.1949% ( 58) 00:30:19.111 10068.713 - 10128.291: 1.7371% ( 59) 00:30:19.111 10128.291 - 10187.869: 2.4908% ( 82) 00:30:19.111 10187.869 - 10247.447: 3.2261% ( 80) 00:30:19.111 10247.447 - 10307.025: 4.3750% ( 125) 00:30:19.111 10307.025 - 10366.604: 5.6434% ( 138) 00:30:19.111 10366.604 - 10426.182: 7.1967% ( 169) 00:30:19.111 10426.182 - 10485.760: 9.2555% ( 224) 00:30:19.111 10485.760 - 10545.338: 11.9393% ( 292) 00:30:19.111 10545.338 - 10604.916: 15.5607% ( 394) 00:30:19.111 10604.916 - 10664.495: 19.2923% ( 406) 00:30:19.111 10664.495 - 10724.073: 22.6838% ( 369) 00:30:19.111 10724.073 - 10783.651: 26.8750% ( 456) 00:30:19.111 10783.651 - 10843.229: 30.7812% ( 425) 00:30:19.111 10843.229 - 10902.807: 34.4577% ( 400) 00:30:19.111 10902.807 - 10962.385: 37.4632% ( 327) 00:30:19.111 10962.385 - 11021.964: 40.4320% ( 323) 00:30:19.111 11021.964 - 11081.542: 43.4191% ( 325) 00:30:19.111 11081.542 - 11141.120: 45.9283% ( 273) 00:30:19.111 11141.120 - 11200.698: 48.6397% ( 295) 00:30:19.111 11200.698 - 11260.276: 51.0754% ( 265) 00:30:19.111 11260.276 - 11319.855: 53.6765% ( 283) 00:30:19.111 11319.855 - 11379.433: 56.2132% ( 276) 00:30:19.111 11379.433 - 11439.011: 58.9062% ( 293) 00:30:19.111 11439.011 - 11498.589: 61.3603% ( 267) 00:30:19.111 11498.589 - 11558.167: 63.5662% ( 240) 00:30:19.111 11558.167 - 11617.745: 65.5974% ( 221) 00:30:19.111 11617.745 - 11677.324: 67.9044% ( 251) 00:30:19.111 11677.324 - 11736.902: 70.0000% ( 228) 00:30:19.111 11736.902 - 11796.480: 71.8566% ( 202) 00:30:19.111 11796.480 - 11856.058: 73.5938% ( 189) 00:30:19.111 11856.058 - 11915.636: 75.1287% ( 167) 00:30:19.111 11915.636 - 11975.215: 76.4798% ( 147) 00:30:19.111 11975.215 - 12034.793: 77.9596% ( 161) 00:30:19.111 12034.793 - 12094.371: 79.1912% ( 134) 00:30:19.111 12094.371 - 12153.949: 80.1562% ( 105) 00:30:19.111 12153.949 - 12213.527: 80.9467% ( 86) 00:30:19.111 12213.527 - 12273.105: 81.8566% ( 99) 00:30:19.111 12273.105 - 12332.684: 82.5368% ( 74) 00:30:19.111 12332.684 - 12392.262: 83.2353% ( 76) 00:30:19.111 12392.262 - 12451.840: 83.7592% ( 57) 00:30:19.111 12451.840 - 12511.418: 84.2371% ( 52) 00:30:19.111 12511.418 - 12570.996: 84.7610% ( 57) 00:30:19.111 12570.996 - 12630.575: 85.2665% ( 55) 00:30:19.111 12630.575 - 12690.153: 85.7537% ( 53) 00:30:19.111 12690.153 - 12749.731: 86.2224% ( 51) 00:30:19.111 12749.731 - 12809.309: 86.5625% ( 37) 00:30:19.111 12809.309 - 12868.887: 86.8474% ( 31) 00:30:19.111 12868.887 - 12928.465: 87.2335% ( 42) 00:30:19.111 12928.465 - 12988.044: 87.6011% ( 40) 00:30:19.111 12988.044 - 13047.622: 88.0515% ( 49) 00:30:19.111 13047.622 - 13107.200: 88.3548% ( 33) 00:30:19.111 13107.200 - 13166.778: 88.6305% ( 30) 00:30:19.111 13166.778 - 13226.356: 88.8511% ( 24) 00:30:19.111 13226.356 - 13285.935: 89.0901% ( 26) 00:30:19.111 13285.935 - 13345.513: 89.2923% ( 22) 00:30:19.111 13345.513 - 
13405.091: 89.5312% ( 26) 00:30:19.111 13405.091 - 13464.669: 89.7151% ( 20) 00:30:19.111 13464.669 - 13524.247: 89.9449% ( 25) 00:30:19.111 13524.247 - 13583.825: 90.2757% ( 36) 00:30:19.111 13583.825 - 13643.404: 90.6158% ( 37) 00:30:19.111 13643.404 - 13702.982: 91.0386% ( 46) 00:30:19.111 13702.982 - 13762.560: 91.4982% ( 50) 00:30:19.111 13762.560 - 13822.138: 91.9853% ( 53) 00:30:19.111 13822.138 - 13881.716: 92.3989% ( 45) 00:30:19.111 13881.716 - 13941.295: 92.7574% ( 39) 00:30:19.111 13941.295 - 14000.873: 93.3088% ( 60) 00:30:19.111 14000.873 - 14060.451: 93.8419% ( 58) 00:30:19.111 14060.451 - 14120.029: 94.2463% ( 44) 00:30:19.111 14120.029 - 14179.607: 94.6691% ( 46) 00:30:19.111 14179.607 - 14239.185: 95.0368% ( 40) 00:30:19.111 14239.185 - 14298.764: 95.3952% ( 39) 00:30:19.111 14298.764 - 14358.342: 95.7261% ( 36) 00:30:19.111 14358.342 - 14417.920: 96.0754% ( 38) 00:30:19.111 14417.920 - 14477.498: 96.3787% ( 33) 00:30:19.111 14477.498 - 14537.076: 96.6544% ( 30) 00:30:19.111 14537.076 - 14596.655: 96.9485% ( 32) 00:30:19.111 14596.655 - 14656.233: 97.1783% ( 25) 00:30:19.111 14656.233 - 14715.811: 97.4908% ( 34) 00:30:19.111 14715.811 - 14775.389: 97.6746% ( 20) 00:30:19.111 14775.389 - 14834.967: 97.8493% ( 19) 00:30:19.111 14834.967 - 14894.545: 97.9688% ( 13) 00:30:19.111 14894.545 - 14954.124: 98.1158% ( 16) 00:30:19.111 14954.124 - 15013.702: 98.2629% ( 16) 00:30:19.111 15013.702 - 15073.280: 98.3456% ( 9) 00:30:19.111 15073.280 - 15132.858: 98.4559% ( 12) 00:30:19.111 15132.858 - 15192.436: 98.5294% ( 8) 00:30:19.111 15192.436 - 15252.015: 98.5938% ( 7) 00:30:19.111 15252.015 - 15371.171: 98.6305% ( 4) 00:30:19.111 15371.171 - 15490.327: 98.6581% ( 3) 00:30:19.111 15490.327 - 15609.484: 98.6857% ( 3) 00:30:19.111 15609.484 - 15728.640: 98.7224% ( 4) 00:30:19.111 15728.640 - 15847.796: 98.7776% ( 6) 00:30:19.111 15847.796 - 15966.953: 98.8235% ( 5) 00:30:19.111 24665.367 - 24784.524: 98.8511% ( 3) 00:30:19.111 24784.524 - 24903.680: 98.8787% ( 3) 00:30:19.111 24903.680 - 25022.836: 98.9154% ( 4) 00:30:19.111 25022.836 - 25141.993: 98.9522% ( 4) 00:30:19.111 25141.993 - 25261.149: 98.9798% ( 3) 00:30:19.111 25261.149 - 25380.305: 99.0165% ( 4) 00:30:19.111 25380.305 - 25499.462: 99.0533% ( 4) 00:30:19.111 25499.462 - 25618.618: 99.0901% ( 4) 00:30:19.111 25618.618 - 25737.775: 99.1176% ( 3) 00:30:19.111 25737.775 - 25856.931: 99.1544% ( 4) 00:30:19.111 25856.931 - 25976.087: 99.1912% ( 4) 00:30:19.111 25976.087 - 26095.244: 99.2188% ( 3) 00:30:19.111 26095.244 - 26214.400: 99.2555% ( 4) 00:30:19.111 26214.400 - 26333.556: 99.2923% ( 4) 00:30:19.111 26333.556 - 26452.713: 99.3199% ( 3) 00:30:19.111 26452.713 - 26571.869: 99.3474% ( 3) 00:30:19.111 26571.869 - 26691.025: 99.3842% ( 4) 00:30:19.111 26691.025 - 26810.182: 99.4118% ( 3) 00:30:19.111 31457.280 - 31695.593: 99.4301% ( 2) 00:30:19.111 31695.593 - 31933.905: 99.4853% ( 6) 00:30:19.111 31933.905 - 32172.218: 99.5588% ( 8) 00:30:19.111 32172.218 - 32410.531: 99.6140% ( 6) 00:30:19.111 32410.531 - 32648.844: 99.6783% ( 7) 00:30:19.111 32648.844 - 32887.156: 99.7518% ( 8) 00:30:19.111 32887.156 - 33125.469: 99.8162% ( 7) 00:30:19.111 33125.469 - 33363.782: 99.8713% ( 6) 00:30:19.111 33363.782 - 33602.095: 99.9357% ( 7) 00:30:19.111 33602.095 - 33840.407: 100.0000% ( 7) 00:30:19.111 00:30:19.111 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:19.111 ============================================================================== 00:30:19.111 Range in us Cumulative IO count 00:30:19.111 9651.665 - 
9711.244: 0.0092% ( 1) 00:30:19.111 9711.244 - 9770.822: 0.0184% ( 1) 00:30:19.111 9770.822 - 9830.400: 0.0368% ( 2) 00:30:19.111 9830.400 - 9889.978: 0.1379% ( 11) 00:30:19.111 9889.978 - 9949.556: 0.3217% ( 20) 00:30:19.111 9949.556 - 10009.135: 0.6250% ( 33) 00:30:19.111 10009.135 - 10068.713: 1.0386% ( 45) 00:30:19.112 10068.713 - 10128.291: 1.4338% ( 43) 00:30:19.112 10128.291 - 10187.869: 2.0588% ( 68) 00:30:19.112 10187.869 - 10247.447: 2.6838% ( 68) 00:30:19.112 10247.447 - 10307.025: 3.7776% ( 119) 00:30:19.112 10307.025 - 10366.604: 5.0735% ( 141) 00:30:19.112 10366.604 - 10426.182: 7.1415% ( 225) 00:30:19.112 10426.182 - 10485.760: 9.4118% ( 247) 00:30:19.112 10485.760 - 10545.338: 12.1599% ( 299) 00:30:19.112 10545.338 - 10604.916: 15.3768% ( 350) 00:30:19.112 10604.916 - 10664.495: 19.3290% ( 430) 00:30:19.112 10664.495 - 10724.073: 23.4651% ( 450) 00:30:19.112 10724.073 - 10783.651: 27.3438% ( 422) 00:30:19.112 10783.651 - 10843.229: 30.9926% ( 397) 00:30:19.112 10843.229 - 10902.807: 34.6140% ( 394) 00:30:19.112 10902.807 - 10962.385: 37.6746% ( 333) 00:30:19.112 10962.385 - 11021.964: 40.5790% ( 316) 00:30:19.112 11021.964 - 11081.542: 43.5294% ( 321) 00:30:19.112 11081.542 - 11141.120: 46.3235% ( 304) 00:30:19.112 11141.120 - 11200.698: 48.7684% ( 266) 00:30:19.112 11200.698 - 11260.276: 51.7555% ( 325) 00:30:19.112 11260.276 - 11319.855: 54.2096% ( 267) 00:30:19.112 11319.855 - 11379.433: 56.9301% ( 296) 00:30:19.112 11379.433 - 11439.011: 59.1544% ( 242) 00:30:19.112 11439.011 - 11498.589: 61.6912% ( 276) 00:30:19.112 11498.589 - 11558.167: 63.9246% ( 243) 00:30:19.112 11558.167 - 11617.745: 65.9191% ( 217) 00:30:19.112 11617.745 - 11677.324: 68.3548% ( 265) 00:30:19.112 11677.324 - 11736.902: 70.5423% ( 238) 00:30:19.112 11736.902 - 11796.480: 72.2151% ( 182) 00:30:19.112 11796.480 - 11856.058: 73.5202% ( 142) 00:30:19.112 11856.058 - 11915.636: 74.8438% ( 144) 00:30:19.112 11915.636 - 11975.215: 76.1121% ( 138) 00:30:19.112 11975.215 - 12034.793: 77.1140% ( 109) 00:30:19.112 12034.793 - 12094.371: 78.1985% ( 118) 00:30:19.112 12094.371 - 12153.949: 79.2555% ( 115) 00:30:19.112 12153.949 - 12213.527: 80.1930% ( 102) 00:30:19.112 12213.527 - 12273.105: 81.1121% ( 100) 00:30:19.112 12273.105 - 12332.684: 81.9301% ( 89) 00:30:19.112 12332.684 - 12392.262: 82.7665% ( 91) 00:30:19.112 12392.262 - 12451.840: 83.4651% ( 76) 00:30:19.112 12451.840 - 12511.418: 84.1085% ( 70) 00:30:19.112 12511.418 - 12570.996: 84.6140% ( 55) 00:30:19.112 12570.996 - 12630.575: 85.0276% ( 45) 00:30:19.112 12630.575 - 12690.153: 85.4871% ( 50) 00:30:19.112 12690.153 - 12749.731: 85.8548% ( 40) 00:30:19.112 12749.731 - 12809.309: 86.2040% ( 38) 00:30:19.112 12809.309 - 12868.887: 86.5257% ( 35) 00:30:19.112 12868.887 - 12928.465: 86.8566% ( 36) 00:30:19.112 12928.465 - 12988.044: 87.1232% ( 29) 00:30:19.112 12988.044 - 13047.622: 87.4632% ( 37) 00:30:19.112 13047.622 - 13107.200: 87.8033% ( 37) 00:30:19.112 13107.200 - 13166.778: 88.0699% ( 29) 00:30:19.112 13166.778 - 13226.356: 88.3088% ( 26) 00:30:19.112 13226.356 - 13285.935: 88.6029% ( 32) 00:30:19.112 13285.935 - 13345.513: 88.8787% ( 30) 00:30:19.112 13345.513 - 13405.091: 89.0809% ( 22) 00:30:19.112 13405.091 - 13464.669: 89.3566% ( 30) 00:30:19.112 13464.669 - 13524.247: 89.6967% ( 37) 00:30:19.112 13524.247 - 13583.825: 89.9632% ( 29) 00:30:19.112 13583.825 - 13643.404: 90.3952% ( 47) 00:30:19.112 13643.404 - 13702.982: 90.8364% ( 48) 00:30:19.112 13702.982 - 13762.560: 91.3327% ( 54) 00:30:19.112 13762.560 - 13822.138: 91.9118% ( 63) 
00:30:19.112 13822.138 - 13881.716: 92.4449% ( 58) 00:30:19.112 13881.716 - 13941.295: 92.8768% ( 47) 00:30:19.112 13941.295 - 14000.873: 93.3364% ( 50) 00:30:19.112 14000.873 - 14060.451: 93.8143% ( 52) 00:30:19.112 14060.451 - 14120.029: 94.2647% ( 49) 00:30:19.112 14120.029 - 14179.607: 94.6967% ( 47) 00:30:19.112 14179.607 - 14239.185: 95.0919% ( 43) 00:30:19.112 14239.185 - 14298.764: 95.4688% ( 41) 00:30:19.112 14298.764 - 14358.342: 95.8364% ( 40) 00:30:19.112 14358.342 - 14417.920: 96.1857% ( 38) 00:30:19.112 14417.920 - 14477.498: 96.4706% ( 31) 00:30:19.112 14477.498 - 14537.076: 96.7831% ( 34) 00:30:19.112 14537.076 - 14596.655: 97.0956% ( 34) 00:30:19.112 14596.655 - 14656.233: 97.3621% ( 29) 00:30:19.112 14656.233 - 14715.811: 97.5735% ( 23) 00:30:19.112 14715.811 - 14775.389: 97.7757% ( 22) 00:30:19.112 14775.389 - 14834.967: 97.9596% ( 20) 00:30:19.112 14834.967 - 14894.545: 98.0974% ( 15) 00:30:19.112 14894.545 - 14954.124: 98.2629% ( 18) 00:30:19.112 14954.124 - 15013.702: 98.3640% ( 11) 00:30:19.112 15013.702 - 15073.280: 98.4559% ( 10) 00:30:19.112 15073.280 - 15132.858: 98.5386% ( 9) 00:30:19.112 15132.858 - 15192.436: 98.5846% ( 5) 00:30:19.112 15192.436 - 15252.015: 98.6121% ( 3) 00:30:19.112 15252.015 - 15371.171: 98.6489% ( 4) 00:30:19.112 15371.171 - 15490.327: 98.6949% ( 5) 00:30:19.112 15490.327 - 15609.484: 98.7408% ( 5) 00:30:19.112 15609.484 - 15728.640: 98.7776% ( 4) 00:30:19.112 15728.640 - 15847.796: 98.8235% ( 5) 00:30:19.112 22282.240 - 22401.396: 98.8511% ( 3) 00:30:19.112 22401.396 - 22520.553: 98.8787% ( 3) 00:30:19.112 22520.553 - 22639.709: 98.9154% ( 4) 00:30:19.112 22639.709 - 22758.865: 98.9522% ( 4) 00:30:19.112 22758.865 - 22878.022: 98.9890% ( 4) 00:30:19.112 22878.022 - 22997.178: 99.0257% ( 4) 00:30:19.112 22997.178 - 23116.335: 99.0533% ( 3) 00:30:19.112 23116.335 - 23235.491: 99.0901% ( 4) 00:30:19.112 23235.491 - 23354.647: 99.1268% ( 4) 00:30:19.112 23354.647 - 23473.804: 99.1636% ( 4) 00:30:19.112 23473.804 - 23592.960: 99.1912% ( 3) 00:30:19.112 23592.960 - 23712.116: 99.2188% ( 3) 00:30:19.112 23712.116 - 23831.273: 99.2555% ( 4) 00:30:19.112 23831.273 - 23950.429: 99.2923% ( 4) 00:30:19.112 23950.429 - 24069.585: 99.3290% ( 4) 00:30:19.112 24069.585 - 24188.742: 99.3566% ( 3) 00:30:19.112 24188.742 - 24307.898: 99.3934% ( 4) 00:30:19.112 24307.898 - 24427.055: 99.4118% ( 2) 00:30:19.112 29193.309 - 29312.465: 99.4393% ( 3) 00:30:19.112 29312.465 - 29431.622: 99.4669% ( 3) 00:30:19.112 29431.622 - 29550.778: 99.5037% ( 4) 00:30:19.112 29550.778 - 29669.935: 99.5312% ( 3) 00:30:19.112 29669.935 - 29789.091: 99.5588% ( 3) 00:30:19.112 29789.091 - 29908.247: 99.5864% ( 3) 00:30:19.112 29908.247 - 30027.404: 99.6232% ( 4) 00:30:19.112 30027.404 - 30146.560: 99.6599% ( 4) 00:30:19.112 30146.560 - 30265.716: 99.6875% ( 3) 00:30:19.112 30265.716 - 30384.873: 99.7243% ( 4) 00:30:19.112 30384.873 - 30504.029: 99.7610% ( 4) 00:30:19.112 30504.029 - 30742.342: 99.8254% ( 7) 00:30:19.112 30742.342 - 30980.655: 99.8897% ( 7) 00:30:19.112 30980.655 - 31218.967: 99.9540% ( 7) 00:30:19.112 31218.967 - 31457.280: 100.0000% ( 5) 00:30:19.112 00:30:19.112 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:19.112 ============================================================================== 00:30:19.112 Range in us Cumulative IO count 00:30:19.112 9711.244 - 9770.822: 0.0091% ( 1) 00:30:19.112 9770.822 - 9830.400: 0.0640% ( 6) 00:30:19.112 9830.400 - 9889.978: 0.1188% ( 6) 00:30:19.112 9889.978 - 9949.556: 0.3381% ( 24) 00:30:19.112 
9949.556 - 10009.135: 0.6213% ( 31) 00:30:19.112 10009.135 - 10068.713: 0.8863% ( 29) 00:30:19.112 10068.713 - 10128.291: 1.4072% ( 57) 00:30:19.112 10128.291 - 10187.869: 2.0376% ( 69) 00:30:19.112 10187.869 - 10247.447: 3.1981% ( 127) 00:30:19.112 10247.447 - 10307.025: 4.2672% ( 117) 00:30:19.112 10307.025 - 10366.604: 5.7383% ( 161) 00:30:19.112 10366.604 - 10426.182: 7.7942% ( 225) 00:30:19.112 10426.182 - 10485.760: 9.7496% ( 214) 00:30:19.112 10485.760 - 10545.338: 12.4178% ( 292) 00:30:19.112 10545.338 - 10604.916: 15.3052% ( 316) 00:30:19.112 10604.916 - 10664.495: 18.8322% ( 386) 00:30:19.112 10664.495 - 10724.073: 23.0355% ( 460) 00:30:19.112 10724.073 - 10783.651: 26.7361% ( 405) 00:30:19.112 10783.651 - 10843.229: 30.2449% ( 384) 00:30:19.112 10843.229 - 10902.807: 33.7993% ( 389) 00:30:19.112 10902.807 - 10962.385: 37.4635% ( 401) 00:30:19.112 10962.385 - 11021.964: 40.6250% ( 346) 00:30:19.112 11021.964 - 11081.542: 43.8414% ( 352) 00:30:19.112 11081.542 - 11141.120: 46.4730% ( 288) 00:30:19.112 11141.120 - 11200.698: 49.2142% ( 300) 00:30:19.112 11200.698 - 11260.276: 52.4854% ( 358) 00:30:19.112 11260.276 - 11319.855: 54.8702% ( 261) 00:30:19.112 11319.855 - 11379.433: 57.9678% ( 339) 00:30:19.112 11379.433 - 11439.011: 60.9558% ( 327) 00:30:19.112 11439.011 - 11498.589: 62.9569% ( 219) 00:30:19.112 11498.589 - 11558.167: 65.0585% ( 230) 00:30:19.112 11558.167 - 11617.745: 67.1875% ( 233) 00:30:19.112 11617.745 - 11677.324: 69.0150% ( 200) 00:30:19.112 11677.324 - 11736.902: 70.9338% ( 210) 00:30:19.112 11736.902 - 11796.480: 72.5055% ( 172) 00:30:19.112 11796.480 - 11856.058: 74.0588% ( 170) 00:30:19.112 11856.058 - 11915.636: 75.2924% ( 135) 00:30:19.112 11915.636 - 11975.215: 76.5351% ( 136) 00:30:19.112 11975.215 - 12034.793: 77.4123% ( 96) 00:30:19.112 12034.793 - 12094.371: 78.0610% ( 71) 00:30:19.112 12094.371 - 12153.949: 78.7555% ( 76) 00:30:19.112 12153.949 - 12213.527: 79.7332% ( 107) 00:30:19.112 12213.527 - 12273.105: 80.4550% ( 79) 00:30:19.112 12273.105 - 12332.684: 81.0398% ( 64) 00:30:19.112 12332.684 - 12392.262: 81.8988% ( 94) 00:30:19.112 12392.262 - 12451.840: 82.6937% ( 87) 00:30:19.112 12451.840 - 12511.418: 83.4887% ( 87) 00:30:19.112 12511.418 - 12570.996: 83.9821% ( 54) 00:30:19.112 12570.996 - 12630.575: 84.7314% ( 82) 00:30:19.112 12630.575 - 12690.153: 85.2431% ( 56) 00:30:19.112 12690.153 - 12749.731: 85.7091% ( 51) 00:30:19.112 12749.731 - 12809.309: 85.9466% ( 26) 00:30:19.112 12809.309 - 12868.887: 86.2025% ( 28) 00:30:19.112 12868.887 - 12928.465: 86.4401% ( 26) 00:30:19.112 12928.465 - 12988.044: 86.7873% ( 38) 00:30:19.112 12988.044 - 13047.622: 87.2898% ( 55) 00:30:19.112 13047.622 - 13107.200: 87.5640% ( 30) 00:30:19.112 13107.200 - 13166.778: 87.8564% ( 32) 00:30:19.112 13166.778 - 13226.356: 88.1853% ( 36) 00:30:19.112 13226.356 - 13285.935: 88.6056% ( 46) 00:30:19.112 13285.935 - 13345.513: 88.8706% ( 29) 00:30:19.112 13345.513 - 13405.091: 89.1813% ( 34) 00:30:19.112 13405.091 - 13464.669: 89.4554% ( 30) 00:30:19.112 13464.669 - 13524.247: 89.8940% ( 48) 00:30:19.112 13524.247 - 13583.825: 90.2595% ( 40) 00:30:19.112 13583.825 - 13643.404: 90.6890% ( 47) 00:30:19.112 13643.404 - 13702.982: 91.1458% ( 50) 00:30:19.113 13702.982 - 13762.560: 91.5296% ( 42) 00:30:19.113 13762.560 - 13822.138: 91.8311% ( 33) 00:30:19.113 13822.138 - 13881.716: 92.1784% ( 38) 00:30:19.113 13881.716 - 13941.295: 92.6078% ( 47) 00:30:19.113 13941.295 - 14000.873: 92.9916% ( 42) 00:30:19.113 14000.873 - 14060.451: 93.3936% ( 44) 00:30:19.113 14060.451 - 
14120.029: 93.6952% ( 33) 00:30:19.113 14120.029 - 14179.607: 94.0333% ( 37) 00:30:19.113 14179.607 - 14239.185: 94.4353% ( 44) 00:30:19.113 14239.185 - 14298.764: 94.8282% ( 43) 00:30:19.113 14298.764 - 14358.342: 95.2303% ( 44) 00:30:19.113 14358.342 - 14417.920: 95.7145% ( 53) 00:30:19.113 14417.920 - 14477.498: 96.2171% ( 55) 00:30:19.113 14477.498 - 14537.076: 96.6374% ( 46) 00:30:19.113 14537.076 - 14596.655: 96.9115% ( 30) 00:30:19.113 14596.655 - 14656.233: 97.2679% ( 39) 00:30:19.113 14656.233 - 14715.811: 97.5969% ( 36) 00:30:19.113 14715.811 - 14775.389: 97.8253% ( 25) 00:30:19.113 14775.389 - 14834.967: 97.9989% ( 19) 00:30:19.113 14834.967 - 14894.545: 98.1634% ( 18) 00:30:19.113 14894.545 - 14954.124: 98.2913% ( 14) 00:30:19.113 14954.124 - 15013.702: 98.3918% ( 11) 00:30:19.113 15013.702 - 15073.280: 98.5015% ( 12) 00:30:19.113 15073.280 - 15132.858: 98.6568% ( 17) 00:30:19.113 15132.858 - 15192.436: 98.7482% ( 10) 00:30:19.113 15192.436 - 15252.015: 98.8213% ( 8) 00:30:19.113 15252.015 - 15371.171: 98.9583% ( 15) 00:30:19.113 15371.171 - 15490.327: 99.0680% ( 12) 00:30:19.113 15490.327 - 15609.484: 99.1594% ( 10) 00:30:19.113 15609.484 - 15728.640: 99.2416% ( 9) 00:30:19.113 15728.640 - 15847.796: 99.2781% ( 4) 00:30:19.113 15847.796 - 15966.953: 99.3056% ( 3) 00:30:19.113 15966.953 - 16086.109: 99.3421% ( 4) 00:30:19.113 16086.109 - 16205.265: 99.3695% ( 3) 00:30:19.113 16205.265 - 16324.422: 99.4061% ( 4) 00:30:19.113 16324.422 - 16443.578: 99.4152% ( 1) 00:30:19.113 21448.145 - 21567.302: 99.4518% ( 4) 00:30:19.113 21567.302 - 21686.458: 99.4792% ( 3) 00:30:19.113 21686.458 - 21805.615: 99.5157% ( 4) 00:30:19.113 21805.615 - 21924.771: 99.5523% ( 4) 00:30:19.113 21924.771 - 22043.927: 99.5888% ( 4) 00:30:19.113 22043.927 - 22163.084: 99.6162% ( 3) 00:30:19.113 22163.084 - 22282.240: 99.6436% ( 3) 00:30:19.113 22282.240 - 22401.396: 99.6802% ( 4) 00:30:19.113 22401.396 - 22520.553: 99.7167% ( 4) 00:30:19.113 22520.553 - 22639.709: 99.7533% ( 4) 00:30:19.113 22639.709 - 22758.865: 99.7807% ( 3) 00:30:19.113 22758.865 - 22878.022: 99.8173% ( 4) 00:30:19.113 22878.022 - 22997.178: 99.8447% ( 3) 00:30:19.113 22997.178 - 23116.335: 99.8812% ( 4) 00:30:19.113 23116.335 - 23235.491: 99.9086% ( 3) 00:30:19.113 23235.491 - 23354.647: 99.9452% ( 4) 00:30:19.113 23354.647 - 23473.804: 99.9817% ( 4) 00:30:19.113 23473.804 - 23592.960: 100.0000% ( 2) 00:30:19.113 00:30:19.113 ************************************ 00:30:19.113 END TEST nvme_perf 00:30:19.113 ************************************ 00:30:19.113 01:59:27 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:19.113 00:30:19.113 real 0m2.745s 00:30:19.113 user 0m2.299s 00:30:19.113 sys 0m0.330s 00:30:19.113 01:59:27 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.113 01:59:27 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:30:19.113 01:59:27 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:19.113 01:59:27 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:19.113 01:59:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.113 01:59:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:19.113 ************************************ 00:30:19.113 START TEST nvme_hello_world 00:30:19.113 ************************************ 00:30:19.113 01:59:27 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 
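For reference, the hello_world example invoked above writes a short test string to each attached namespace and reads it back, which is why one "Hello world!" line appears per namespace in the output that follows. A minimal standalone rerun, assuming the same built tree and that the NVMe devices are already bound to a userspace driver (e.g. via scripts/setup.sh in the same repo):

  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0

The -i 0 argument matches the shared-memory ID the harness passes to every tool in this run.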
00:30:19.371 Initializing NVMe Controllers 00:30:19.371 Attached to 0000:00:10.0 00:30:19.371 Namespace ID: 1 size: 6GB 00:30:19.371 Attached to 0000:00:11.0 00:30:19.371 Namespace ID: 1 size: 5GB 00:30:19.371 Attached to 0000:00:13.0 00:30:19.371 Namespace ID: 1 size: 1GB 00:30:19.371 Attached to 0000:00:12.0 00:30:19.371 Namespace ID: 1 size: 4GB 00:30:19.371 Namespace ID: 2 size: 4GB 00:30:19.371 Namespace ID: 3 size: 4GB 00:30:19.371 Initialization complete. 00:30:19.371 INFO: using host memory buffer for IO 00:30:19.371 Hello world! 00:30:19.371 INFO: using host memory buffer for IO 00:30:19.371 Hello world! 00:30:19.371 INFO: using host memory buffer for IO 00:30:19.371 Hello world! 00:30:19.371 INFO: using host memory buffer for IO 00:30:19.371 Hello world! 00:30:19.371 INFO: using host memory buffer for IO 00:30:19.371 Hello world! 00:30:19.371 INFO: using host memory buffer for IO 00:30:19.371 Hello world! 00:30:19.371 ************************************ 00:30:19.371 END TEST nvme_hello_world 00:30:19.371 ************************************ 00:30:19.371 00:30:19.371 real 0m0.331s 00:30:19.371 user 0m0.129s 00:30:19.371 sys 0m0.157s 00:30:19.371 01:59:28 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.371 01:59:28 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:19.371 01:59:28 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:19.371 01:59:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:19.371 01:59:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.371 01:59:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:19.371 ************************************ 00:30:19.371 START TEST nvme_sgl 00:30:19.371 ************************************ 00:30:19.371 01:59:28 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:19.937 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:30:19.938 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:30:19.938 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:30:19.938 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:30:19.938 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:30:19.938 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:30:19.938 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:30:19.938 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:30:19.938 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:30:19.938 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:30:19.938 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:30:19.938 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_9 
Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:30:19.938 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:30:19.938 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:30:19.938 NVMe Readv/Writev Request test 00:30:19.938 Attached to 0000:00:10.0 00:30:19.938 Attached to 0000:00:11.0 00:30:19.938 Attached to 0000:00:13.0 00:30:19.938 Attached to 0000:00:12.0 00:30:19.938 0000:00:10.0: build_io_request_2 test passed 00:30:19.938 0000:00:10.0: build_io_request_4 test passed 00:30:19.938 0000:00:10.0: build_io_request_5 test passed 00:30:19.938 0000:00:10.0: build_io_request_6 test passed 00:30:19.938 0000:00:10.0: build_io_request_7 test passed 00:30:19.938 0000:00:10.0: build_io_request_10 test passed 00:30:19.938 0000:00:11.0: build_io_request_2 test passed 00:30:19.938 0000:00:11.0: build_io_request_4 test passed 00:30:19.938 0000:00:11.0: build_io_request_5 test passed 00:30:19.938 0000:00:11.0: build_io_request_6 test passed 00:30:19.938 0000:00:11.0: build_io_request_7 test passed 00:30:19.938 0000:00:11.0: build_io_request_10 test passed 00:30:19.938 Cleaning up... 00:30:19.938 00:30:19.938 real 0m0.449s 00:30:19.938 user 0m0.220s 00:30:19.938 sys 0m0.168s 00:30:19.938 01:59:28 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.938 01:59:28 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:30:19.938 ************************************ 00:30:19.938 END TEST nvme_sgl 00:30:19.938 ************************************ 00:30:19.938 01:59:28 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:19.938 01:59:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:19.938 01:59:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.938 01:59:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:19.938 ************************************ 00:30:19.938 START TEST nvme_e2edp 00:30:19.938 ************************************ 00:30:19.938 01:59:28 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:20.196 NVMe Write/Read with End-to-End data protection test 00:30:20.196 Attached to 0000:00:10.0 00:30:20.196 Attached to 0000:00:11.0 00:30:20.196 Attached to 0000:00:13.0 00:30:20.196 Attached to 0000:00:12.0 00:30:20.196 Cleaning up... 
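Each suite in this section is driven through the run_test helper from common/autotest_common.sh (visible in the xtrace paths above), which prints the START/END banners, appears to time the test body (the real/user/sys lines), and propagates a failing exit status. The calling pattern, exactly as it appears for the two suites around this point:

  run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
  run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp

The "Invalid IO length parameter" lines from the sgl run above appear to be expected negative cases: the tool deliberately builds malformed scatter-gather requests, and the suite still finishes with "Cleaning up..." and passes, with only the build_io_request_* cases reported as "test passed" required to succeed.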
00:30:20.196 00:30:20.196 real 0m0.339s 00:30:20.196 user 0m0.120s 00:30:20.196 sys 0m0.163s 00:30:20.454 ************************************ 00:30:20.454 END TEST nvme_e2edp 00:30:20.454 ************************************ 00:30:20.454 01:59:29 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.454 01:59:29 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:30:20.454 01:59:29 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:20.454 01:59:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:20.454 01:59:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.454 01:59:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:20.454 ************************************ 00:30:20.454 START TEST nvme_reserve 00:30:20.454 ************************************ 00:30:20.454 01:59:29 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:20.722 ===================================================== 00:30:20.722 NVMe Controller at PCI bus 0, device 16, function 0 00:30:20.722 ===================================================== 00:30:20.722 Reservations: Not Supported 00:30:20.722 ===================================================== 00:30:20.722 NVMe Controller at PCI bus 0, device 17, function 0 00:30:20.722 ===================================================== 00:30:20.722 Reservations: Not Supported 00:30:20.722 ===================================================== 00:30:20.722 NVMe Controller at PCI bus 0, device 19, function 0 00:30:20.722 ===================================================== 00:30:20.722 Reservations: Not Supported 00:30:20.722 ===================================================== 00:30:20.722 NVMe Controller at PCI bus 0, device 18, function 0 00:30:20.722 ===================================================== 00:30:20.722 Reservations: Not Supported 00:30:20.722 Reservation test passed 00:30:20.722 ************************************ 00:30:20.722 END TEST nvme_reserve 00:30:20.722 ************************************ 00:30:20.722 00:30:20.722 real 0m0.356s 00:30:20.722 user 0m0.127s 00:30:20.722 sys 0m0.178s 00:30:20.722 01:59:29 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.722 01:59:29 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:30:20.722 01:59:29 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:20.722 01:59:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:20.722 01:59:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.722 01:59:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:20.722 ************************************ 00:30:20.722 START TEST nvme_err_injection 00:30:20.722 ************************************ 00:30:20.722 01:59:29 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:21.042 NVMe Error Injection test 00:30:21.042 Attached to 0000:00:10.0 00:30:21.042 Attached to 0000:00:11.0 00:30:21.042 Attached to 0000:00:13.0 00:30:21.042 Attached to 0000:00:12.0 00:30:21.042 0000:00:10.0: get features failed as expected 00:30:21.042 0000:00:11.0: get features failed as expected 00:30:21.042 0000:00:13.0: get features failed as expected 00:30:21.042 0000:00:12.0: get features failed as expected 00:30:21.042 
0000:00:13.0: get features successfully as expected 00:30:21.042 0000:00:12.0: get features successfully as expected 00:30:21.042 0000:00:10.0: get features successfully as expected 00:30:21.042 0000:00:11.0: get features successfully as expected 00:30:21.042 0000:00:12.0: read failed as expected 00:30:21.042 0000:00:10.0: read failed as expected 00:30:21.042 0000:00:11.0: read failed as expected 00:30:21.042 0000:00:13.0: read failed as expected 00:30:21.042 0000:00:10.0: read successfully as expected 00:30:21.042 0000:00:11.0: read successfully as expected 00:30:21.042 0000:00:13.0: read successfully as expected 00:30:21.042 0000:00:12.0: read successfully as expected 00:30:21.042 Cleaning up... 00:30:21.042 00:30:21.042 real 0m0.364s 00:30:21.042 user 0m0.139s 00:30:21.042 sys 0m0.176s 00:30:21.042 01:59:30 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:21.042 ************************************ 00:30:21.042 01:59:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:30:21.042 END TEST nvme_err_injection 00:30:21.042 ************************************ 00:30:21.300 01:59:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:21.300 01:59:30 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:30:21.300 01:59:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:21.300 01:59:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:21.300 ************************************ 00:30:21.300 START TEST nvme_overhead 00:30:21.300 ************************************ 00:30:21.300 01:59:30 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:22.676 Initializing NVMe Controllers 00:30:22.676 Attached to 0000:00:10.0 00:30:22.676 Attached to 0000:00:11.0 00:30:22.676 Attached to 0000:00:13.0 00:30:22.676 Attached to 0000:00:12.0 00:30:22.676 Initialization complete. Launching workers. 
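The overhead tool started above was invoked with -o 4096 -t 1 -H -i 0. Judging from that invocation and the output below, -o is the IO size in bytes, -t the run time in seconds, and -H enables the per-IO submit/complete latency histograms; these flag meanings are inferred from this run rather than from the tool's usage text. A longer rerun at the same block size might look like:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 10 -H -i 0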
00:30:22.676 submit (in ns) avg, min, max = 16645.4, 13241.4, 79364.1 00:30:22.676 complete (in ns) avg, min, max = 10811.9, 9410.0, 187530.0 00:30:22.676 00:30:22.676 Submit histogram 00:30:22.676 ================ 00:30:22.676 Range in us Cumulative Count 00:30:22.676 13.207 - 13.265: 0.0104% ( 1) 00:30:22.676 13.440 - 13.498: 0.0208% ( 1) 00:30:22.676 13.498 - 13.556: 0.0312% ( 1) 00:30:22.676 14.022 - 14.080: 0.0416% ( 1) 00:30:22.676 14.429 - 14.487: 0.0521% ( 1) 00:30:22.676 14.487 - 14.545: 0.0625% ( 1) 00:30:22.676 14.662 - 14.720: 0.1145% ( 5) 00:30:22.676 14.720 - 14.778: 0.3019% ( 18) 00:30:22.676 14.778 - 14.836: 0.8953% ( 57) 00:30:22.676 14.836 - 14.895: 2.4568% ( 150) 00:30:22.676 14.895 - 15.011: 11.0764% ( 828) 00:30:22.676 15.011 - 15.127: 24.5472% ( 1294) 00:30:22.676 15.127 - 15.244: 37.7993% ( 1273) 00:30:22.676 15.244 - 15.360: 47.4599% ( 928) 00:30:22.676 15.360 - 15.476: 54.8928% ( 714) 00:30:22.676 15.476 - 15.593: 60.2436% ( 514) 00:30:22.676 15.593 - 15.709: 63.9288% ( 354) 00:30:22.676 15.709 - 15.825: 66.6042% ( 257) 00:30:22.676 15.825 - 15.942: 68.4780% ( 180) 00:30:22.676 15.942 - 16.058: 69.6856% ( 116) 00:30:22.676 16.058 - 16.175: 70.4560% ( 74) 00:30:22.676 16.175 - 16.291: 71.0806% ( 60) 00:30:22.676 16.291 - 16.407: 71.4449% ( 35) 00:30:22.676 16.407 - 16.524: 71.7364% ( 28) 00:30:22.676 16.524 - 16.640: 71.9550% ( 21) 00:30:22.676 16.640 - 16.756: 72.1945% ( 23) 00:30:22.676 16.756 - 16.873: 72.4235% ( 22) 00:30:22.676 16.873 - 16.989: 72.6837% ( 25) 00:30:22.676 16.989 - 17.105: 72.9544% ( 26) 00:30:22.676 17.105 - 17.222: 73.4333% ( 46) 00:30:22.676 17.222 - 17.338: 73.9642% ( 51) 00:30:22.676 17.338 - 17.455: 74.3077% ( 33) 00:30:22.676 17.455 - 17.571: 74.6617% ( 34) 00:30:22.676 17.571 - 17.687: 74.9636% ( 29) 00:30:22.676 17.687 - 17.804: 75.1509% ( 18) 00:30:22.676 17.804 - 17.920: 75.3071% ( 15) 00:30:22.676 17.920 - 18.036: 75.5257% ( 21) 00:30:22.676 18.036 - 18.153: 76.1711% ( 62) 00:30:22.676 18.153 - 18.269: 77.4516% ( 123) 00:30:22.676 18.269 - 18.385: 79.5336% ( 200) 00:30:22.676 18.385 - 18.502: 82.0008% ( 237) 00:30:22.676 18.502 - 18.618: 84.8949% ( 278) 00:30:22.676 18.618 - 18.735: 87.0602% ( 208) 00:30:22.676 18.735 - 18.851: 88.4343% ( 132) 00:30:22.676 18.851 - 18.967: 89.5794% ( 110) 00:30:22.676 18.967 - 19.084: 90.3706% ( 76) 00:30:22.676 19.084 - 19.200: 90.8807% ( 49) 00:30:22.676 19.200 - 19.316: 91.3492% ( 45) 00:30:22.676 19.316 - 19.433: 91.6719% ( 31) 00:30:22.676 19.433 - 19.549: 91.9529% ( 27) 00:30:22.676 19.549 - 19.665: 92.2548% ( 29) 00:30:22.676 19.665 - 19.782: 92.5047% ( 24) 00:30:22.676 19.782 - 19.898: 92.6088% ( 10) 00:30:22.676 19.898 - 20.015: 92.7753% ( 16) 00:30:22.676 20.015 - 20.131: 92.8795% ( 10) 00:30:22.676 20.131 - 20.247: 92.9731% ( 9) 00:30:22.676 20.247 - 20.364: 93.0668% ( 9) 00:30:22.676 20.364 - 20.480: 93.1501% ( 8) 00:30:22.676 20.480 - 20.596: 93.2646% ( 11) 00:30:22.676 20.596 - 20.713: 93.3479% ( 8) 00:30:22.676 20.713 - 20.829: 93.5353% ( 18) 00:30:22.676 20.829 - 20.945: 93.6186% ( 8) 00:30:22.676 20.945 - 21.062: 93.6498% ( 3) 00:30:22.676 21.062 - 21.178: 93.8268% ( 17) 00:30:22.676 21.178 - 21.295: 93.9205% ( 9) 00:30:22.676 21.295 - 21.411: 94.0350% ( 11) 00:30:22.676 21.411 - 21.527: 94.2120% ( 17) 00:30:22.676 21.527 - 21.644: 94.3369% ( 12) 00:30:22.676 21.644 - 21.760: 94.4930% ( 15) 00:30:22.676 21.760 - 21.876: 94.6179% ( 12) 00:30:22.676 21.876 - 21.993: 94.7429% ( 12) 00:30:22.676 21.993 - 22.109: 94.8366% ( 9) 00:30:22.676 22.109 - 22.225: 94.9719% ( 13) 00:30:22.676 
22.225 - 22.342: 95.0864% ( 11) 00:30:22.676 22.342 - 22.458: 95.1489% ( 6) 00:30:22.676 22.458 - 22.575: 95.2738% ( 12) 00:30:22.676 22.575 - 22.691: 95.3467% ( 7) 00:30:22.676 22.691 - 22.807: 95.4299% ( 8) 00:30:22.676 22.807 - 22.924: 95.5132% ( 8) 00:30:22.676 22.924 - 23.040: 95.5861% ( 7) 00:30:22.676 23.040 - 23.156: 95.6590% ( 7) 00:30:22.676 23.156 - 23.273: 95.7631% ( 10) 00:30:22.676 23.273 - 23.389: 95.8255% ( 6) 00:30:22.676 23.389 - 23.505: 95.9088% ( 8) 00:30:22.676 23.505 - 23.622: 95.9609% ( 5) 00:30:22.676 23.622 - 23.738: 96.0962% ( 13) 00:30:22.676 23.738 - 23.855: 96.1482% ( 5) 00:30:22.676 23.855 - 23.971: 96.2523% ( 10) 00:30:22.676 23.971 - 24.087: 96.3356% ( 8) 00:30:22.676 24.087 - 24.204: 96.4189% ( 8) 00:30:22.676 24.204 - 24.320: 96.5022% ( 8) 00:30:22.676 24.320 - 24.436: 96.5751% ( 7) 00:30:22.676 24.436 - 24.553: 96.6479% ( 7) 00:30:22.676 24.553 - 24.669: 96.7833% ( 13) 00:30:22.676 24.669 - 24.785: 96.8978% ( 11) 00:30:22.676 24.785 - 24.902: 96.9706% ( 7) 00:30:22.676 24.902 - 25.018: 97.0747% ( 10) 00:30:22.676 25.018 - 25.135: 97.1893% ( 11) 00:30:22.676 25.135 - 25.251: 97.2934% ( 10) 00:30:22.676 25.251 - 25.367: 97.3558% ( 6) 00:30:22.676 25.367 - 25.484: 97.4703% ( 11) 00:30:22.676 25.484 - 25.600: 97.6057% ( 13) 00:30:22.676 25.600 - 25.716: 97.6889% ( 8) 00:30:22.676 25.716 - 25.833: 97.7618% ( 7) 00:30:22.676 25.833 - 25.949: 97.8555% ( 9) 00:30:22.676 25.949 - 26.065: 97.9492% ( 9) 00:30:22.676 26.065 - 26.182: 98.0637% ( 11) 00:30:22.676 26.182 - 26.298: 98.1886% ( 12) 00:30:22.676 26.298 - 26.415: 98.2615% ( 7) 00:30:22.676 26.415 - 26.531: 98.3656% ( 10) 00:30:22.676 26.531 - 26.647: 98.4593% ( 9) 00:30:22.676 26.647 - 26.764: 98.5113% ( 5) 00:30:22.676 26.764 - 26.880: 98.5530% ( 4) 00:30:22.676 26.880 - 26.996: 98.6050% ( 5) 00:30:22.676 26.996 - 27.113: 98.6675% ( 6) 00:30:22.676 27.113 - 27.229: 98.7091% ( 4) 00:30:22.676 27.229 - 27.345: 98.7612% ( 5) 00:30:22.676 27.345 - 27.462: 98.8028% ( 4) 00:30:22.676 27.462 - 27.578: 98.8237% ( 2) 00:30:22.676 27.578 - 27.695: 98.8549% ( 3) 00:30:22.676 27.695 - 27.811: 98.8757% ( 2) 00:30:22.676 27.811 - 27.927: 98.9173% ( 4) 00:30:22.676 27.927 - 28.044: 98.9590% ( 4) 00:30:22.676 28.044 - 28.160: 98.9902% ( 3) 00:30:22.676 28.160 - 28.276: 99.0110% ( 2) 00:30:22.676 28.276 - 28.393: 99.0423% ( 3) 00:30:22.676 28.393 - 28.509: 99.0839% ( 4) 00:30:22.676 28.509 - 28.625: 99.1047% ( 2) 00:30:22.676 28.625 - 28.742: 99.1464% ( 4) 00:30:22.676 28.742 - 28.858: 99.1776% ( 3) 00:30:22.676 28.858 - 28.975: 99.2088% ( 3) 00:30:22.676 28.975 - 29.091: 99.2192% ( 1) 00:30:22.676 29.091 - 29.207: 99.2609% ( 4) 00:30:22.676 29.207 - 29.324: 99.2817% ( 2) 00:30:22.676 29.324 - 29.440: 99.3025% ( 2) 00:30:22.676 29.440 - 29.556: 99.3442% ( 4) 00:30:22.676 29.556 - 29.673: 99.3754% ( 3) 00:30:22.676 29.673 - 29.789: 99.3962% ( 2) 00:30:22.676 29.789 - 30.022: 99.4274% ( 3) 00:30:22.676 30.022 - 30.255: 99.4483% ( 2) 00:30:22.676 30.255 - 30.487: 99.4795% ( 3) 00:30:22.676 30.487 - 30.720: 99.4899% ( 1) 00:30:22.676 30.720 - 30.953: 99.5524% ( 6) 00:30:22.676 30.953 - 31.185: 99.5732% ( 2) 00:30:22.676 31.185 - 31.418: 99.5940% ( 2) 00:30:22.676 31.418 - 31.651: 99.6356% ( 4) 00:30:22.676 31.651 - 31.884: 99.6669% ( 3) 00:30:22.676 31.884 - 32.116: 99.6877% ( 2) 00:30:22.676 32.116 - 32.349: 99.7189% ( 3) 00:30:22.676 32.582 - 32.815: 99.7397% ( 2) 00:30:22.676 33.047 - 33.280: 99.7710% ( 3) 00:30:22.676 33.280 - 33.513: 99.7814% ( 1) 00:30:22.676 33.513 - 33.745: 99.7918% ( 1) 00:30:22.676 33.745 - 33.978: 
00:30:22.676 [latency histogram tail; one bucket per line in the original console output, running from 34.211 - 34.444: 99.8334% ( 2) up through 79.127 - 79.593: 100.0000% ( 1)]
00:30:22.676 
00:30:22.676 Complete histogram
00:30:22.676 ==================
00:30:22.676 Range in us    Cumulative    Count
00:30:22.677 [complete histogram; one bucket per line in the original console output, from 9.367 - 9.425: 0.0104% ( 1) through 187.113 - 188.044: 100.0000% ( 1), with the bulk of samples falling between roughly 9.5 us and 12.4 us]
00:30:22.677 
00:30:22.677 real	0m1.326s
00:30:22.677 user	0m1.112s
00:30:22.677 sys	0m0.160s
00:30:22.677 01:59:31 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:22.677 01:59:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:30:22.677 ************************************
00:30:22.677 END TEST nvme_overhead
00:30:22.677 ************************************
00:30:22.677 01:59:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:30:22.677 01:59:31 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:30:22.678 01:59:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:22.678 01:59:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:22.678 ************************************
00:30:22.678 START TEST nvme_arbitration
00:30:22.678 ************************************
00:30:22.678 01:59:31 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:30:25.969 Initializing NVMe Controllers
00:30:25.969 Attached to 0000:00:10.0
00:30:25.969 Attached to 0000:00:11.0
00:30:25.969 Attached to 0000:00:13.0
00:30:25.969 Attached to 0000:00:12.0
00:30:25.969 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:30:25.969 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:30:25.969 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:30:25.969 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:30:25.969 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:30:25.969 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:30:25.969 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:30:25.969 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:30:25.969 Initialization complete. Launching workers.
00:30:25.969 Starting thread on core 1 with urgent priority queue
00:30:25.969 Starting thread on core 2 with urgent priority queue
00:30:25.969 Starting thread on core 3 with urgent priority queue
00:30:25.969 Starting thread on core 0 with urgent priority queue
00:30:25.969 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios
00:30:25.969 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios
00:30:25.969 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios
00:30:25.969 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios
00:30:25.969 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios
00:30:25.969 QEMU NVMe Ctrl (12342 ) core 3: 576.00 IO/s 173.61 secs/100000 ios
00:30:25.969 ========================================================
00:30:25.969 
00:30:25.969 
00:30:25.969 real	0m3.473s
00:30:25.969 user	0m9.442s
00:30:25.969 sys	0m0.195s
00:30:25.969 01:59:34 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:25.969 01:59:34 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:30:25.969 ************************************
00:30:25.969 END TEST nvme_arbitration
00:30:25.969 ************************************
00:30:25.969 01:59:34 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:30:25.969 01:59:34 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:30:25.969 01:59:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:25.969 01:59:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:25.969 ************************************
00:30:25.969 START TEST nvme_single_aen
00:30:25.969 ************************************
00:30:25.969 01:59:34 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:30:26.536 Asynchronous Event Request test
00:30:26.536 Attached to 0000:00:10.0
00:30:26.536 Attached to 0000:00:11.0
00:30:26.536 Attached to 0000:00:13.0
00:30:26.536 Attached to 0000:00:12.0
00:30:26.536 Reset controller to setup AER completions for this process
00:30:26.536 Registering asynchronous event callbacks...
00:30:26.536 Getting orig temperature thresholds of all controllers
00:30:26.536 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:26.536 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:26.536 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:26.536 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:26.536 Setting all controllers temperature threshold low to trigger AER
00:30:26.536 Waiting for all controllers temperature threshold to be set lower
00:30:26.536 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:26.536 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:30:26.536 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:26.536 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:30:26.536 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:26.536 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:30:26.536 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:26.536 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:30:26.536 Waiting for all controllers to trigger AER and reset threshold
00:30:26.536 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:26.536 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:26.536 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:26.536 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:26.536 Cleaning up...
00:30:26.536 ************************************
00:30:26.536 END TEST nvme_single_aen
00:30:26.536 ************************************
00:30:26.536 
00:30:26.536 real	0m0.296s
00:30:26.536 user	0m0.100s
00:30:26.536 sys	0m0.141s
00:30:26.537 01:59:35 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:26.537 01:59:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:30:26.537 01:59:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:30:26.537 01:59:35 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:30:26.537 01:59:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:26.537 01:59:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:26.537 ************************************
00:30:26.537 START TEST nvme_doorbell_aers
00:30:26.537 ************************************
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
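[Editor's note: the get_nvme_bdfs trace just above boils down to a single pipeline: gen_nvme.sh prints a JSON bdev config and jq pulls out each controller's PCI address. A minimal standalone sketch, assuming the same repo layout as this job; the non-empty guard mirrors the (( 4 == 0 )) check that follows in the log:]

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a JSON config whose entries carry params.traddr
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"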
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:30:26.537 01:59:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:30:26.795 [2024-10-15 01:59:35.693919] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:30:36.773 Executing: test_write_invalid_db
00:30:36.773 Waiting for AER completion...
00:30:36.773 Failure: test_write_invalid_db
00:30:36.773 
00:30:36.773 Executing: test_invalid_db_write_overflow_sq
00:30:36.773 Waiting for AER completion...
00:30:36.773 Failure: test_invalid_db_write_overflow_sq
00:30:36.773 
00:30:36.773 Executing: test_invalid_db_write_overflow_cq
00:30:36.773 Waiting for AER completion...
00:30:36.773 Failure: test_invalid_db_write_overflow_cq
00:30:36.773 
00:30:36.773 01:59:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:30:36.773 01:59:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:30:36.773 [2024-10-15 01:59:45.730845] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:30:46.797 Executing: test_write_invalid_db
00:30:46.797 Waiting for AER completion...
00:30:46.797 Failure: test_write_invalid_db
00:30:46.797 
00:30:46.797 Executing: test_invalid_db_write_overflow_sq
00:30:46.797 Waiting for AER completion...
00:30:46.797 Failure: test_invalid_db_write_overflow_sq
00:30:46.797 
00:30:46.797 Executing: test_invalid_db_write_overflow_cq
00:30:46.797 Waiting for AER completion...
00:30:46.797 Failure: test_invalid_db_write_overflow_cq
00:30:46.797 
00:30:46.797 01:59:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:30:46.797 01:59:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:30:46.933 [2024-10-15 01:59:55.772320] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:30:56.808 Executing: test_write_invalid_db
00:30:56.808 Waiting for AER completion...
00:30:56.808 Failure: test_write_invalid_db
00:30:56.808 
00:30:56.808 Executing: test_invalid_db_write_overflow_sq
00:30:56.808 Waiting for AER completion...
00:30:56.808 Failure: test_invalid_db_write_overflow_sq
00:30:56.808 
00:30:56.808 Executing: test_invalid_db_write_overflow_cq
00:30:56.808 Waiting for AER completion...
00:30:56.808 Failure: test_invalid_db_write_overflow_cq
00:30:56.808 
00:30:56.808 02:00:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:30:56.808 02:00:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:30:57.066 [2024-10-15 02:00:05.840679] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 Executing: test_write_invalid_db
00:31:07.036 Waiting for AER completion...
00:31:07.036 Failure: test_write_invalid_db
00:31:07.036 
00:31:07.036 Executing: test_invalid_db_write_overflow_sq
00:31:07.036 Waiting for AER completion...
00:31:07.036 Failure: test_invalid_db_write_overflow_sq
00:31:07.036 
00:31:07.036 Executing: test_invalid_db_write_overflow_cq
00:31:07.036 Waiting for AER completion...
00:31:07.036 Failure: test_invalid_db_write_overflow_cq
00:31:07.036 
00:31:07.036 
00:31:07.036 real	0m40.263s
00:31:07.036 user	0m34.168s
00:31:07.036 sys	0m5.740s
00:31:07.036 02:00:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:07.036 ************************************
00:31:07.036 END TEST nvme_doorbell_aers
00:31:07.036 ************************************
00:31:07.036 02:00:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:31:07.036 02:00:15 nvme -- nvme/nvme.sh@97 -- # uname
00:31:07.036 02:00:15 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:31:07.036 02:00:15 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:31:07.036 02:00:15 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:31:07.036 02:00:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:07.036 02:00:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:07.036 ************************************
00:31:07.036 START TEST nvme_multi_aen
00:31:07.036 ************************************
00:31:07.036 02:00:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:31:07.036 [2024-10-15 02:00:15.915325] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.915475] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.915504] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.917468] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.917691] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.917724] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.919333] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.919394] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.919447] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.921031] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.921238] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 [2024-10-15 02:00:15.921269] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65206) is not found. Dropping the request.
00:31:07.036 Child process pid: 65727
00:31:07.294 [Child] Asynchronous Event Request test
00:31:07.294 [Child] Attached to 0000:00:10.0
00:31:07.294 [Child] Attached to 0000:00:11.0
00:31:07.294 [Child] Attached to 0000:00:13.0
00:31:07.294 [Child] Attached to 0000:00:12.0
00:31:07.294 [Child] Registering asynchronous event callbacks...
00:31:07.294 [Child] Getting orig temperature thresholds of all controllers
00:31:07.294 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.294 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.294 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.294 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.294 [Child] Waiting for all controllers to trigger AER and reset threshold
00:31:07.294 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.294 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.294 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.294 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.294 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.294 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.294 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.294 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.294 [Child] Cleaning up...
00:31:07.295 Asynchronous Event Request test
00:31:07.295 Attached to 0000:00:10.0
00:31:07.295 Attached to 0000:00:11.0
00:31:07.295 Attached to 0000:00:13.0
00:31:07.295 Attached to 0000:00:12.0
00:31:07.295 Reset controller to setup AER completions for this process
00:31:07.295 Registering asynchronous event callbacks...
00:31:07.295 Getting orig temperature thresholds of all controllers
00:31:07.295 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.295 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.295 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.295 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:31:07.295 Setting all controllers temperature threshold low to trigger AER
00:31:07.295 Waiting for all controllers temperature threshold to be set lower
00:31:07.295 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.295 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:31:07.295 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.295 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:31:07.295 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.295 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:31:07.295 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:31:07.295 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:31:07.295 Waiting for all controllers to trigger AER and reset threshold
00:31:07.295 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.295 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.295 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.295 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:31:07.295 Cleaning up...
00:31:07.295 ************************************
00:31:07.295 END TEST nvme_multi_aen
00:31:07.295 ************************************
00:31:07.295 
00:31:07.295 real	0m0.670s
00:31:07.295 user	0m0.230s
00:31:07.295 sys	0m0.319s
00:31:07.295 02:00:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:07.295 02:00:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:31:07.574 02:00:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:31:07.574 02:00:16 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:31:07.574 02:00:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:07.574 02:00:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:07.574 ************************************
00:31:07.574 START TEST nvme_startup
00:31:07.574 ************************************
00:31:07.574 02:00:16 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:31:07.846 Initializing NVMe Controllers
00:31:07.846 Attached to 0000:00:10.0
00:31:07.846 Attached to 0000:00:11.0
00:31:07.846 Attached to 0000:00:13.0
00:31:07.846 Attached to 0000:00:12.0
00:31:07.846 Initialization complete.
00:31:07.846 Time used:226511.891 (us).
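[Editor's note: the temperature-threshold dance above is the core of both AER tests: read each controller's original threshold, set it below the current temperature so the drive fires an Asynchronous Event, catch aer_cb reading log page 2 (SMART/health), then restore the threshold. To re-run just this step by hand — assuming the flag meanings inferred from this log (-T drives the temperature test, -i 0 picks shm id 0, -m forks the [Child] copy seen above), which should be checked against the aer tool's usage text:]

    cd /home/vagrant/spdk_repo/spdk
    ./test/nvme/aer/aer -T -i 0      # single-process run (nvme_single_aen)
    ./test/nvme/aer/aer -m -T -i 0   # parent + [Child] run (nvme_multi_aen)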
00:31:07.846 
00:31:07.846 real	0m0.322s
00:31:07.846 user	0m0.130s
00:31:07.846 sys	0m0.141s
00:31:07.846 ************************************
00:31:07.846 END TEST nvme_startup
00:31:07.846 ************************************
00:31:07.846 02:00:16 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:07.846 02:00:16 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:31:07.846 02:00:16 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:31:07.846 02:00:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:31:07.846 02:00:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:07.846 02:00:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:07.846 ************************************
00:31:07.846 START TEST nvme_multi_secondary
00:31:07.846 ************************************
00:31:07.846 02:00:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary
00:31:07.846 02:00:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65782
00:31:07.846 02:00:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:31:07.846 02:00:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65783
00:31:07.846 02:00:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:31:07.846 02:00:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:31:11.129 Initializing NVMe Controllers
00:31:11.129 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:31:11.129 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:31:11.129 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:31:11.129 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:31:11.129 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:31:11.129 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:31:11.129 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:31:11.129 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:31:11.129 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:31:11.129 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:31:11.129 Initialization complete. Launching workers.
00:31:11.129 ========================================================
00:31:11.129 Latency(us)
00:31:11.129 Device Information : IOPS MiB/s Average min max
00:31:11.129 PCIE (0000:00:10.0) NSID 1 from core 1: 5779.00 22.57 2766.53 825.41 38167.48
00:31:11.129 PCIE (0000:00:11.0) NSID 1 from core 1: 5762.34 22.51 2776.06 856.57 42325.77
00:31:11.129 PCIE (0000:00:13.0) NSID 1 from core 1: 5814.99 22.71 2750.84 732.88 37725.20
00:31:11.129 PCIE (0000:00:12.0) NSID 1 from core 1: 5793.66 22.63 2760.88 761.84 40733.46
00:31:11.129 PCIE (0000:00:12.0) NSID 2 from core 1: 5814.99 22.71 2750.69 961.21 28917.43
00:31:11.129 PCIE (0000:00:12.0) NSID 3 from core 1: 5836.98 22.80 2740.24 899.66 27351.06
00:31:11.129 ========================================================
00:31:11.129 Total : 34801.96 135.95 2757.49 732.88 42325.77
00:31:11.129 
00:31:11.387 Initializing NVMe Controllers
00:31:11.387 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:31:11.387 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:31:11.387 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:31:11.387 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:31:11.387 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:31:11.387 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:31:11.387 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:31:11.387 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:31:11.387 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:31:11.387 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:31:11.387 Initialization complete. Launching workers.
00:31:11.387 ========================================================
00:31:11.387 Latency(us)
00:31:11.387 Device Information : IOPS MiB/s Average min max
00:31:11.387 PCIE (0000:00:10.0) NSID 1 from core 2: 2542.47 9.93 6290.70 1069.58 23203.37
00:31:11.387 PCIE (0000:00:11.0) NSID 1 from core 2: 2542.47 9.93 6292.66 919.19 24306.06
00:31:11.387 PCIE (0000:00:13.0) NSID 1 from core 2: 2547.80 9.95 6279.31 983.79 24315.32
00:31:11.387 PCIE (0000:00:12.0) NSID 1 from core 2: 2537.14 9.91 6301.28 1214.01 29317.06
00:31:11.387 PCIE (0000:00:12.0) NSID 2 from core 2: 2537.14 9.91 6297.02 1248.38 29701.47
00:31:11.387 PCIE (0000:00:12.0) NSID 3 from core 2: 2537.14 9.91 6296.36 1086.31 24151.74
00:31:11.387 ========================================================
00:31:11.387 Total : 15244.16 59.55 6292.88 919.19 29701.47
00:31:11.387 
00:31:11.387 02:00:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65782
00:31:13.916 Initializing NVMe Controllers
00:31:13.916 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:31:13.916 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:31:13.916 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:31:13.916 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:31:13.916 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:31:13.916 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:31:13.916 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:31:13.916 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:31:13.916 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:31:13.916 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:31:13.916 Initialization complete. Launching workers.
00:31:13.916 ========================================================
00:31:13.916 Latency(us)
00:31:13.916 Device Information : IOPS MiB/s Average min max
00:31:13.916 PCIE (0000:00:10.0) NSID 1 from core 0: 8412.44 32.86 1900.19 878.32 22986.50
00:31:13.916 PCIE (0000:00:11.0) NSID 1 from core 0: 8412.44 32.86 1901.43 924.72 23328.06
00:31:13.916 PCIE (0000:00:13.0) NSID 1 from core 0: 8412.44 32.86 1901.36 947.05 24432.71
00:31:13.916 PCIE (0000:00:12.0) NSID 1 from core 0: 8412.44 32.86 1901.30 917.31 25206.09
00:31:13.916 PCIE (0000:00:12.0) NSID 2 from core 0: 8409.24 32.85 1901.96 908.23 25193.84
00:31:13.916 PCIE (0000:00:12.0) NSID 3 from core 0: 8406.04 32.84 1902.62 907.34 23867.29
00:31:13.916 ========================================================
00:31:13.916 Total : 50465.04 197.13 1901.48 878.32 25206.09
00:31:13.916 
00:31:13.916 02:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65783
00:31:13.916 02:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65860
00:31:13.916 02:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:31:13.916 02:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65861
00:31:13.916 02:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:31:13.916 02:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:31:17.200 Initializing NVMe Controllers
00:31:17.200 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:31:17.200 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:31:17.200 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:31:17.200 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:31:17.200 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:31:17.200 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:31:17.200 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:31:17.200 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:31:17.200 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:31:17.200 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:31:17.200 Initialization complete. Launching workers.
00:31:17.200 ========================================================
00:31:17.200 Latency(us)
00:31:17.200 Device Information : IOPS MiB/s Average min max
00:31:17.200 PCIE (0000:00:10.0) NSID 1 from core 0: 5378.99 21.01 2972.56 970.90 6981.59
00:31:17.200 PCIE (0000:00:11.0) NSID 1 from core 0: 5378.99 21.01 2974.10 1003.29 6909.03
00:31:17.200 PCIE (0000:00:13.0) NSID 1 from core 0: 5378.99 21.01 2974.15 988.40 6552.97
00:31:17.200 PCIE (0000:00:12.0) NSID 1 from core 0: 5384.32 21.03 2971.24 962.96 6635.13
00:31:17.200 PCIE (0000:00:12.0) NSID 2 from core 0: 5384.32 21.03 2971.24 975.46 6918.38
00:31:17.200 PCIE (0000:00:12.0) NSID 3 from core 0: 5384.32 21.03 2971.17 957.80 6421.86
00:31:17.200 ========================================================
00:31:17.200 Total : 32289.92 126.13 2972.41 957.80 6981.59
00:31:17.200 
00:31:17.200 Initializing NVMe Controllers
00:31:17.200 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:31:17.200 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:31:17.200 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:31:17.200 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:31:17.200 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:31:17.200 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:31:17.200 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:31:17.200 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:31:17.200 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:31:17.200 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:31:17.200 Initialization complete. Launching workers.
00:31:17.200 ========================================================
00:31:17.200 Latency(us)
00:31:17.200 Device Information : IOPS MiB/s Average min max
00:31:17.200 PCIE (0000:00:10.0) NSID 1 from core 1: 5261.93 20.55 3038.69 1055.82 7005.75
00:31:17.200 PCIE (0000:00:11.0) NSID 1 from core 1: 5261.93 20.55 3040.00 1073.21 6814.54
00:31:17.200 PCIE (0000:00:13.0) NSID 1 from core 1: 5261.93 20.55 3039.89 1079.85 6330.61
00:31:17.200 PCIE (0000:00:12.0) NSID 1 from core 1: 5261.93 20.55 3039.72 1044.82 7139.14
00:31:17.200 PCIE (0000:00:12.0) NSID 2 from core 1: 5261.93 20.55 3039.53 1015.71 7458.31
00:31:17.200 PCIE (0000:00:12.0) NSID 3 from core 1: 5261.93 20.55 3039.42 992.76 7047.45
00:31:17.200 ========================================================
00:31:17.200 Total : 31571.57 123.33 3039.54 992.76 7458.31
00:31:17.200 
00:31:19.141 Initializing NVMe Controllers
00:31:19.141 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:31:19.141 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:31:19.141 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:31:19.141 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:31:19.141 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:31:19.141 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:31:19.141 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:31:19.141 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:31:19.141 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:31:19.141 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:31:19.141 Initialization complete. Launching workers.
00:31:19.141 ========================================================
00:31:19.141 Latency(us)
00:31:19.141 Device Information : IOPS MiB/s Average min max
00:31:19.141 PCIE (0000:00:10.0) NSID 1 from core 2: 3636.03 14.20 4396.26 964.16 16434.39
00:31:19.141 PCIE (0000:00:11.0) NSID 1 from core 2: 3636.03 14.20 4396.13 960.35 16033.44
00:31:19.141 PCIE (0000:00:13.0) NSID 1 from core 2: 3636.03 14.20 4396.27 994.76 14597.45
00:31:19.141 PCIE (0000:00:12.0) NSID 1 from core 2: 3636.03 14.20 4395.74 998.83 14945.26
00:31:19.141 PCIE (0000:00:12.0) NSID 2 from core 2: 3636.03 14.20 4395.90 996.18 15044.83
00:31:19.141 PCIE (0000:00:12.0) NSID 3 from core 2: 3636.03 14.20 4395.71 933.74 14476.25
00:31:19.141 ========================================================
00:31:19.141 Total : 21816.17 85.22 4396.00 933.74 16434.39
00:31:19.141 
00:31:19.141 ************************************
00:31:19.141 END TEST nvme_multi_secondary
00:31:19.141 ************************************
00:31:19.141 02:00:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65860
00:31:19.141 02:00:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65861
00:31:19.141 
00:31:19.141 real	0m11.388s
00:31:19.141 user	0m18.554s
00:31:19.141 sys	0m1.108s
00:31:19.141 02:00:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:19.141 02:00:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:31:19.399 02:00:28 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:31:19.399 02:00:28 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:31:19.399 02:00:28 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64785 ]]
00:31:19.399 02:00:28 nvme -- common/autotest_common.sh@1090 -- # kill 64785
00:31:19.399 02:00:28 nvme -- common/autotest_common.sh@1091 -- # wait 64785
00:31:19.399 [2024-10-15 02:00:28.170501] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.170617] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.170680] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.170718] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.174904] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.174992] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.175031] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.175066] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.178748] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
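[Editor's note: before the teardown messages above continue, it is worth spelling out the shape of the multi-process run that just finished: several spdk_nvme_perf instances share one SPDK shared-memory instance via the same -i 0, so the first to start acts as the primary process and the rest attach as secondaries, each pinned to its own core mask. A sketch of the pattern using the exact command lines from this log (the pids are whatever the shell assigns; here they were 65782 and 65783):]

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!  # longer-lived instance on core 0
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!  # secondary on core 2
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2            # secondary on core 1, foreground
    wait "$pid0" "$pid1"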
00:31:19.399 [2024-10-15 02:00:28.178835] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.178871] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.178905] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.182056] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.182317] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.182350] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.399 [2024-10-15 02:00:28.182374] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65725) is not found. Dropping the request.
00:31:19.658 [2024-10-15 02:00:28.493720] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited.
00:31:19.658 02:00:28 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0
00:31:19.658 02:00:28 nvme -- common/autotest_common.sh@1097 -- # echo 2
00:31:19.658 02:00:28 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:31:19.658 02:00:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:31:19.658 02:00:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:19.658 02:00:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:19.658 ************************************
00:31:19.658 START TEST bdev_nvme_reset_stuck_adm_cmd
00:31:19.658 ************************************
00:31:19.658 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:31:19.658 * Looking for test storage...
00:31:19.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:31:19.658 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:31:19.658 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version
00:31:19.658 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:31:19.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:19.917 --rc genhtml_branch_coverage=1
00:31:19.917 --rc genhtml_function_coverage=1
00:31:19.917 --rc genhtml_legend=1
00:31:19.917 --rc geninfo_all_blocks=1
00:31:19.917 --rc geninfo_unexecuted_blocks=1
00:31:19.917 
00:31:19.917 '
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:31:19.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:19.917 --rc genhtml_branch_coverage=1
00:31:19.917 --rc genhtml_function_coverage=1
00:31:19.917 --rc genhtml_legend=1
00:31:19.917 --rc geninfo_all_blocks=1
00:31:19.917 --rc geninfo_unexecuted_blocks=1
00:31:19.917 
00:31:19.917 '
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:31:19.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:19.917 --rc genhtml_branch_coverage=1
00:31:19.917 --rc genhtml_function_coverage=1
00:31:19.917 --rc genhtml_legend=1
00:31:19.917 --rc geninfo_all_blocks=1
00:31:19.917 --rc geninfo_unexecuted_blocks=1
00:31:19.917 
00:31:19.917 '
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:31:19.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:19.917 --rc genhtml_branch_coverage=1
00:31:19.917 --rc genhtml_function_coverage=1
00:31:19.917 --rc genhtml_legend=1
00:31:19.917 --rc geninfo_all_blocks=1
00:31:19.917 --rc geninfo_unexecuted_blocks=1
00:31:19.917 
00:31:19.917 '
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
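[Editor's note: the long cmp_versions trace above is just a dotted-version comparison, here deciding that lcov 1.15 predates 2.x and so needs the --rc coverage options. A compact re-implementation of the same idea — not the exact scripts/common.sh code:]

    version_lt() {                        # returns 0 when $1 < $2
        local IFS=.- a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable the --rc options exported above"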
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=()
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=()
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:31:19.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66023
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66023
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 66023 ']'
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
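[Editor's note: get_first_nvme_bdf, traced above, simply takes the first address reported by the same generator used everywhere else in this run. A hypothetical one-liner equivalent for this runner — the helper in the tree walks the full list, so this is an approximation:]

    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[0].params.traddr')
    [[ -n $bdf && $bdf != null ]] || { echo "no NVMe controller found" >&2; exit 1; }
    echo "$bdf"    # prints 0000:00:10.0 on this machine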
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:19.917 02:00:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:31:20.176 [2024-10-15 02:00:28.929307] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
00:31:20.176 [2024-10-15 02:00:28.929693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66023 ]
00:31:20.176 [2024-10-15 02:00:29.133304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:20.435 [2024-10-15 02:00:29.434773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:31:20.435 [2024-10-15 02:00:29.435108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:31:20.435 [2024-10-15 02:00:29.435243] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:31:20.435 [2024-10-15 02:00:29.435265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:31:21.811 [2024-10-15 02:00:30.457264] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200036a18da0 was disconnected and freed. delete nvme_qpair.
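[Editor's note: the sequence just logged — spdk_tgt comes up, waitforlisten polls for its RPC socket, then the controller is attached — can be reproduced by hand. A crude stand-in for waitforlisten using the same socket and retry budget (max_retries=100), with the real rpc_get_methods RPC as the liveness probe; the in-tree helper is more careful than this:]

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0xF & tgt_pid=$!
    for ((i = 0; i < 100; i++)); do   # poll instead of a blind sleep
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0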
00:31:21.811 nvme0n1
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_rGM3x.txt
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:21.811 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:31:21.812 true
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728957630
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66057
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:31:21.812 02:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:31:23.713 [2024-10-15 02:00:32.489999] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:31:23.713 [2024-10-15 02:00:32.490378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:23.713 [2024-10-15 02:00:32.490432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:31:23.713 [2024-10-15 02:00:32.490457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:23.713 [2024-10-15 02:00:32.492579] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0] Resetting controller successful.
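[Editor's note: the exchange above is the heart of this test and can be replayed with three RPCs: arm one injected failure for admin opcode 10 (0x0a, Get Features) with --do_not_submit so the command sits unanswered, fire a Get Features (the base64 blob is the raw 64-byte command; reuse the one from the log), then reset the controller and confirm it manually completes the stuck request — the "Command completed manually" notice above. A sketch using the exact commands already traced:]

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    start=$(date +%s)
    "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &  # CMD_B64: blob from the log
    sleep 2
    "$RPC" bdev_nvme_reset_controller nvme0
    wait   # send_cmd returns once the reset flushes the stuck command
    echo "flushed in $(( $(date +%s) - start ))s (the test passes when this stays <= test_timeout=5)"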
00:31:23.713 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66057 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66057 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66057 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_rGM3x.txt 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_rGM3x.txt 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66023 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 66023 ']' 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 66023 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66023 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:23.713 killing process with pid 66023 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66023' 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 66023 00:31:23.713 02:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 66023 00:31:26.242 02:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:26.242 02:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:26.242 00:31:26.242 real 0m6.585s 00:31:26.242 user 0m22.191s 00:31:26.242 sys 0m0.855s 00:31:26.242 02:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:26.242 02:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:26.242 ************************************ 00:31:26.242 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:26.242 ************************************ 00:31:26.242 02:00:35 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:26.242 02:00:35 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:26.242 02:00:35 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:26.242 02:00:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.242 02:00:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:26.242 ************************************ 00:31:26.242 START TEST nvme_fio 00:31:26.242 ************************************ 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 
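The two base64_decode_bits calls above pull the Status Code (SC) and Status Code Type (SCT) out of the base64-encoded 16-byte completion that the error-injection path saved to /tmp/err_inj_rGM3x.txt. A standalone sketch of the same decoding, with a hypothetical helper name (this is not the test's own function, just the spec-level arithmetic on CQE bytes 14-15, which hold the phase bit, SC, SCT, CRD, M and DNR fields):

decode_cpl_status() {
    local b64=$1 bytes status
    # Decode the 16-byte CQE and read bytes 14-15 (DW3[31:16]) little-endian.
    bytes=($(base64 -d <<< "$b64" | od -An -tu1))
    status=$(( (bytes[15] << 8 | bytes[14]) >> 1 ))   # shift off the phase bit
    printf 'SC=0x%x SCT=0x%x\n' $((status & 0xff)) $(((status >> 8) & 0x7))
}
decode_cpl_status AAAAAAAAAAAAAAAAAAACAA==   # prints SC=0x1 SCT=0x0, matching the trace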
00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:31:26.242 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:26.242 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:26.501 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:26.501 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:27.069 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:27.069 02:00:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:27.069 02:00:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:27.069 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:27.069 fio-3.35 00:31:27.069 Starting 1 thread 00:31:30.354 00:31:30.354 test: (groupid=0, jobs=1): err= 0: pid=66204: Tue Oct 15 02:00:39 2024 00:31:30.354 read: IOPS=17.2k, BW=67.2MiB/s (70.4MB/s)(134MiB/2001msec) 00:31:30.354 slat (nsec): min=4680, max=66340, avg=6149.88, stdev=1880.99 00:31:30.354 clat (usec): min=319, max=9561, avg=3701.50, stdev=481.07 00:31:30.354 lat (usec): min=326, max=9627, avg=3707.65, stdev=481.77 00:31:30.354 clat percentiles (usec): 00:31:30.354 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:31:30.354 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:31:30.354 | 70.00th=[ 3654], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4424], 00:31:30.354 | 99.00th=[ 5473], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8094], 00:31:30.354 | 99.99th=[ 9372] 00:31:30.354 bw ( KiB/s): min=60736, max=73264, per=100.00%, avg=69058.67, stdev=7207.78, samples=3 00:31:30.354 iops : min=15184, max=18316, avg=17264.67, stdev=1801.94, samples=3 00:31:30.354 write: IOPS=17.2k, BW=67.3MiB/s (70.5MB/s)(135MiB/2001msec); 0 zone resets 00:31:30.354 slat (nsec): min=4747, max=63121, avg=6283.24, stdev=1923.32 00:31:30.354 clat (usec): min=350, max=9450, avg=3708.11, stdev=489.17 00:31:30.354 lat (usec): min=357, max=9462, avg=3714.40, stdev=489.86 00:31:30.354 clat percentiles (usec): 00:31:30.354 | 1.00th=[ 3261], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:31:30.354 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:31:30.354 | 70.00th=[ 3654], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4490], 00:31:30.354 | 99.00th=[ 5473], 99.50th=[ 6652], 99.90th=[ 7898], 99.95th=[ 8225], 00:31:30.354 | 99.99th=[ 9110] 00:31:30.354 bw ( KiB/s): min=61040, max=72920, per=100.00%, avg=68952.00, stdev=6852.00, samples=3 00:31:30.354 iops : min=15260, max=18230, avg=17238.00, stdev=1713.00, samples=3 00:31:30.354 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:30.354 lat (msec) : 2=0.04%, 4=77.47%, 10=22.46% 00:31:30.354 cpu : usr=99.05%, sys=0.15%, ctx=2, majf=0, minf=607 00:31:30.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:30.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.354 issued rwts: total=34402,34450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.354 00:31:30.354 Run status group 0 (all jobs): 00:31:30.354 READ: bw=67.2MiB/s (70.4MB/s), 67.2MiB/s-67.2MiB/s (70.4MB/s-70.4MB/s), io=134MiB (141MB), run=2001-2001msec 00:31:30.354 WRITE: bw=67.3MiB/s (70.5MB/s), 67.3MiB/s-67.3MiB/s (70.5MB/s-70.5MB/s), io=135MiB (141MB), run=2001-2001msec 00:31:30.613 ----------------------------------------------------- 00:31:30.613 Suppressions used: 00:31:30.613 count bytes template 00:31:30.613 1 32 /usr/src/fio/parse.c 00:31:30.613 1 8 libtcmalloc_minimal.so 00:31:30.613 ----------------------------------------------------- 00:31:30.613 00:31:30.613 02:00:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:30.613 02:00:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:30.613 02:00:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:30.613 02:00:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:30.872 02:00:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:30.872 02:00:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:31.131 02:00:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:31.131 02:00:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:31.131 02:00:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:31.390 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:31.390 fio-3.35 00:31:31.390 Starting 1 thread 00:31:34.675 00:31:34.675 test: (groupid=0, jobs=1): err= 0: pid=66270: Tue Oct 15 02:00:43 2024 00:31:34.675 read: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(129MiB/2001msec) 00:31:34.675 slat (nsec): min=4628, max=74099, avg=6360.81, stdev=1987.10 00:31:34.675 clat (usec): min=233, max=9182, avg=3858.86, stdev=543.23 00:31:34.675 lat (usec): min=239, max=9256, avg=3865.22, stdev=543.98 00:31:34.675 clat percentiles (usec): 00:31:34.675 | 1.00th=[ 3097], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 00:31:34.675 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3785], 00:31:34.675 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:31:34.675 | 99.00th=[ 6194], 99.50th=[ 7242], 99.90th=[ 7832], 99.95th=[ 
8029], 00:31:34.675 | 99.99th=[ 8979] 00:31:34.675 bw ( KiB/s): min=60344, max=68960, per=98.68%, avg=65048.00, stdev=4362.26, samples=3 00:31:34.675 iops : min=15086, max=17240, avg=16262.00, stdev=1090.56, samples=3 00:31:34.675 write: IOPS=16.5k, BW=64.5MiB/s (67.6MB/s)(129MiB/2001msec); 0 zone resets 00:31:34.675 slat (nsec): min=4763, max=60163, avg=6528.73, stdev=2170.14 00:31:34.675 clat (usec): min=264, max=9038, avg=3867.78, stdev=549.56 00:31:34.675 lat (usec): min=270, max=9050, avg=3874.31, stdev=550.35 00:31:34.675 clat percentiles (usec): 00:31:34.675 | 1.00th=[ 3064], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3523], 00:31:34.675 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3785], 00:31:34.675 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:31:34.675 | 99.00th=[ 6390], 99.50th=[ 7373], 99.90th=[ 7898], 99.95th=[ 8094], 00:31:34.675 | 99.99th=[ 8848] 00:31:34.675 bw ( KiB/s): min=60760, max=68248, per=98.15%, avg=64834.67, stdev=3787.55, samples=3 00:31:34.675 iops : min=15190, max=17062, avg=16208.67, stdev=946.89, samples=3 00:31:34.675 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:31:34.675 lat (msec) : 2=0.09%, 4=66.91%, 10=32.95% 00:31:34.675 cpu : usr=98.75%, sys=0.30%, ctx=3, majf=0, minf=608 00:31:34.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:34.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:34.675 issued rwts: total=32975,33045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:34.675 00:31:34.675 Run status group 0 (all jobs): 00:31:34.675 READ: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=129MiB (135MB), run=2001-2001msec 00:31:34.675 WRITE: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=129MiB (135MB), run=2001-2001msec 00:31:34.675 ----------------------------------------------------- 00:31:34.675 Suppressions used: 00:31:34.675 count bytes template 00:31:34.675 1 32 /usr/src/fio/parse.c 00:31:34.675 1 8 libtcmalloc_minimal.so 00:31:34.675 ----------------------------------------------------- 00:31:34.675 00:31:34.675 02:00:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:34.675 02:00:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:34.675 02:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:34.676 02:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:35.243 02:00:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:35.243 02:00:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:35.503 02:00:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:35.503 02:00:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
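In each of these fio_plugin invocations the harness runs ldd over the SPDK fio plugin, greps for the ASAN runtime, and keeps the third field (the resolved library path) so that runtime can be LD_PRELOADed ahead of the plugin itself. A condensed sketch of that discovery step, under the same paths the trace shows (same idea, not the harness's exact code):

# Find the sanitizer runtime the plugin links against, then preload it
# together with the plugin before launching fio with the spdk ioengine.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')
if [[ -n $asan_lib ]]; then
    export LD_PRELOAD="$asan_lib $plugin"
fi

Preloading the exact libasan the plugin was linked against matters here: fio itself is not sanitized, so without the preload the plugin's instrumented symbols would fail to resolve at dlopen time.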
00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:35.503 02:00:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:31:35.503 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:35.503 fio-3.35 00:31:35.503 Starting 1 thread 00:31:38.812 00:31:38.812 test: (groupid=0, jobs=1): err= 0: pid=66331: Tue Oct 15 02:00:47 2024 00:31:38.812 read: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(121MiB/2001msec) 00:31:38.812 slat (usec): min=4, max=214, avg= 6.87, stdev= 2.56 00:31:38.812 clat (usec): min=240, max=13229, avg=4123.19, stdev=587.58 00:31:38.812 lat (usec): min=246, max=13278, avg=4130.06, stdev=588.29 00:31:38.812 clat percentiles (usec): 00:31:38.812 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:31:38.812 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4178], 60.00th=[ 4293], 00:31:38.812 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:31:38.812 | 99.00th=[ 6783], 99.50th=[ 7701], 99.90th=[ 8717], 99.95th=[11207], 00:31:38.812 | 99.99th=[13042] 00:31:38.812 bw ( KiB/s): min=58128, max=64640, per=99.44%, avg=61480.00, stdev=3260.24, samples=3 00:31:38.812 iops : min=14532, max=16160, avg=15370.00, stdev=815.06, samples=3 00:31:38.812 write: IOPS=15.5k, BW=60.4MiB/s (63.4MB/s)(121MiB/2001msec); 0 zone resets 00:31:38.812 slat (nsec): min=4821, max=62242, avg=6997.14, stdev=2309.61 00:31:38.812 clat (usec): min=273, max=13145, avg=4125.01, stdev=589.42 00:31:38.812 lat (usec): min=278, max=13162, avg=4132.00, stdev=590.12 00:31:38.812 clat percentiles (usec): 00:31:38.812 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3687], 00:31:38.812 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4178], 60.00th=[ 4293], 00:31:38.812 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:31:38.812 | 99.00th=[ 6783], 99.50th=[ 7701], 99.90th=[ 9503], 99.95th=[11338], 00:31:38.812 | 99.99th=[12780] 00:31:38.812 bw ( KiB/s): min=58416, max=63384, per=98.65%, avg=61037.33, stdev=2495.36, samples=3 00:31:38.813 iops : 
min=14604, max=15846, avg=15259.33, stdev=623.84, samples=3 00:31:38.813 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:31:38.813 lat (msec) : 2=0.06%, 4=43.41%, 10=56.40%, 20=0.08% 00:31:38.813 cpu : usr=98.60%, sys=0.20%, ctx=29, majf=0, minf=608 00:31:38.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:38.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.813 issued rwts: total=30928,30951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.813 00:31:38.813 Run status group 0 (all jobs): 00:31:38.813 READ: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:31:38.813 WRITE: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:31:39.078 ----------------------------------------------------- 00:31:39.078 Suppressions used: 00:31:39.078 count bytes template 00:31:39.078 1 32 /usr/src/fio/parse.c 00:31:39.078 1 8 libtcmalloc_minimal.so 00:31:39.078 ----------------------------------------------------- 00:31:39.078 00:31:39.078 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:39.078 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:39.078 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:39.078 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:39.345 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:39.345 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:39.603 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:39.603 02:00:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.603 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:31:39.861 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:31:39.861 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:39.861 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:31:39.861 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:39.861 02:00:48 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:31:39.861 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:39.861 fio-3.35 00:31:39.861 Starting 1 thread 00:31:45.128 00:31:45.128 test: (groupid=0, jobs=1): err= 0: pid=66397: Tue Oct 15 02:00:53 2024 00:31:45.128 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(116MiB/2015msec) 00:31:45.128 slat (nsec): min=4631, max=63059, avg=6008.16, stdev=1757.05 00:31:45.128 clat (usec): min=654, max=22005, avg=3285.27, stdev=1146.84 00:31:45.128 lat (usec): min=660, max=22011, avg=3291.27, stdev=1147.16 00:31:45.128 clat percentiles (usec): 00:31:45.128 | 1.00th=[ 1680], 5.00th=[ 1811], 10.00th=[ 1909], 20.00th=[ 2180], 00:31:45.128 | 30.00th=[ 2638], 40.00th=[ 3359], 50.00th=[ 3490], 60.00th=[ 3556], 00:31:45.128 | 70.00th=[ 3621], 80.00th=[ 3720], 90.00th=[ 4228], 95.00th=[ 5276], 00:31:45.128 | 99.00th=[ 6915], 99.50th=[ 8094], 99.90th=[14746], 99.95th=[17433], 00:31:45.128 | 99.99th=[19530] 00:31:45.128 bw ( KiB/s): min=41608, max=67464, per=100.00%, avg=59095.00, stdev=11937.24, samples=4 00:31:45.128 iops : min=10402, max=16866, avg=14773.75, stdev=2984.31, samples=4 00:31:45.128 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(116MiB/2015msec); 0 zone resets 00:31:45.128 slat (nsec): min=4708, max=65097, avg=6147.88, stdev=1820.11 00:31:45.128 clat (usec): min=863, max=32997, avg=5390.90, stdev=4432.83 00:31:45.128 lat (usec): min=869, max=33003, avg=5397.04, stdev=4432.82 00:31:45.128 clat percentiles (usec): 00:31:45.128 | 1.00th=[ 1729], 5.00th=[ 1893], 10.00th=[ 2114], 20.00th=[ 2900], 00:31:45.128 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3654], 00:31:45.128 | 70.00th=[ 3785], 80.00th=[ 9634], 90.00th=[11731], 95.00th=[15139], 00:31:45.128 | 99.00th=[21103], 99.50th=[22938], 99.90th=[26346], 99.95th=[29492], 00:31:45.128 | 99.99th=[32375] 00:31:45.128 bw ( KiB/s): min=42632, max=66760, per=100.00%, avg=59056.75, stdev=11303.29, samples=4 00:31:45.128 iops : min=10658, max=16690, avg=14764.00, stdev=2825.66, samples=4 00:31:45.128 lat (usec) : 750=0.01%, 1000=0.02% 00:31:45.128 lat (msec) : 2=10.49%, 4=70.44%, 10=9.27%, 20=8.94%, 50=0.83% 00:31:45.128 cpu : usr=99.06%, sys=0.15%, ctx=5, majf=0, minf=605 00:31:45.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:45.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.128 issued rwts: total=29594,29642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.128 00:31:45.128 Run status group 0 (all jobs): 00:31:45.128 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=116MiB (121MB), run=2015-2015msec 00:31:45.128 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=116MiB (121MB), run=2015-2015msec 00:31:45.128 
----------------------------------------------------- 00:31:45.128 Suppressions used: 00:31:45.128 count bytes template 00:31:45.128 1 32 /usr/src/fio/parse.c 00:31:45.128 1 8 libtcmalloc_minimal.so 00:31:45.128 ----------------------------------------------------- 00:31:45.128 00:31:45.128 ************************************ 00:31:45.128 END TEST nvme_fio 00:31:45.128 ************************************ 00:31:45.128 02:00:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:45.128 02:00:54 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:31:45.128 00:31:45.128 real 0m18.877s 00:31:45.128 user 0m15.296s 00:31:45.128 sys 0m1.981s 00:31:45.128 02:00:54 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.128 02:00:54 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:31:45.128 ************************************ 00:31:45.128 END TEST nvme 00:31:45.128 ************************************ 00:31:45.128 00:31:45.128 real 1m34.514s 00:31:45.128 user 3m49.915s 00:31:45.128 sys 0m15.341s 00:31:45.128 02:00:54 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.128 02:00:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:45.128 02:00:54 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:31:45.128 02:00:54 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:45.128 02:00:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:45.128 02:00:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:45.128 02:00:54 -- common/autotest_common.sh@10 -- # set +x 00:31:45.128 ************************************ 00:31:45.128 START TEST nvme_scc 00:31:45.128 ************************************ 00:31:45.128 02:00:54 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:45.387 * Looking for test storage... 00:31:45.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@345 -- # : 1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.387 02:00:54 nvme_scc -- scripts/common.sh@368 -- # return 0 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:45.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.387 --rc genhtml_branch_coverage=1 00:31:45.387 --rc genhtml_function_coverage=1 00:31:45.387 --rc genhtml_legend=1 00:31:45.387 --rc geninfo_all_blocks=1 00:31:45.387 --rc geninfo_unexecuted_blocks=1 00:31:45.387 00:31:45.387 ' 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:45.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.387 --rc genhtml_branch_coverage=1 00:31:45.387 --rc genhtml_function_coverage=1 00:31:45.387 --rc genhtml_legend=1 00:31:45.387 --rc geninfo_all_blocks=1 00:31:45.387 --rc geninfo_unexecuted_blocks=1 00:31:45.387 00:31:45.387 ' 00:31:45.387 02:00:54 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:45.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.387 --rc genhtml_branch_coverage=1 00:31:45.387 --rc genhtml_function_coverage=1 00:31:45.387 --rc genhtml_legend=1 00:31:45.387 --rc geninfo_all_blocks=1 00:31:45.387 --rc geninfo_unexecuted_blocks=1 00:31:45.387 00:31:45.387 ' 00:31:45.388 02:00:54 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:45.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.388 --rc genhtml_branch_coverage=1 00:31:45.388 --rc genhtml_function_coverage=1 00:31:45.388 --rc genhtml_legend=1 00:31:45.388 --rc geninfo_all_blocks=1 00:31:45.388 --rc geninfo_unexecuted_blocks=1 00:31:45.388 00:31:45.388 ' 00:31:45.388 02:00:54 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:45.388 02:00:54 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.388 02:00:54 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.388 02:00:54 nvme_scc -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.388 02:00:54 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.388 02:00:54 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.388 02:00:54 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.388 02:00:54 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.388 02:00:54 nvme_scc -- paths/export.sh@5 -- # export PATH 00:31:45.388 02:00:54 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:45.388 02:00:54 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:31:45.388 02:00:54 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:45.388 02:00:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:31:45.388 02:00:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:45.388 02:00:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:31:45.388 02:00:54 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:45.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:45.954 Waiting for block devices as requested 00:31:45.954 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:46.213 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:46.213 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:46.213 0000:00:13.0 (1b36 0010): 
uio_pci_generic -> nvme 00:31:51.485 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:51.485 02:01:00 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:31:51.485 02:01:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:31:51.485 02:01:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:31:51.485 02:01:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:51.485 02:01:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:51.485 02:01:00 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.485 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:51.486 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
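The nvme_get trace running through this stretch is folding `nvme id-ctrl /dev/nvme0` output, one "field : value" line at a time, into the nvme0 associative array via eval. A simplified sketch of the same pattern (the real helper lives in test/common/nvme/functions.sh and handles sub-fields this version skips):

declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}          # strip the padding around the field name
    val=${val# }                      # drop the single space after the colon
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
printf 'mdts=%s oacs=%s\n' "${nvme0[mdts]}" "${nvme0[oacs]}"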
00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:51.486 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:51.486 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0x44 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:51.487 
02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:51.487 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:51.488 
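The repeating IFS=: / read -r reg val / eval steps traced above are nvme_get in test/nvme/functions.sh flattening nvme-cli identify output into a global associative array, one register per key. A minimal standalone sketch of that pattern, simplified relative to the real helper:

# Sketch of the key/value parse the trace shows: turn "name : value"
# lines from nvme-cli identify output into a bash associative array.
# Simplified; the real nvme_get also shifts its args and declares the
# target array name dynamically via local -gA.
declare -A ctrl_info=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue           # skip lines without a value, as in the trace
    reg=${reg//[[:space:]]/}            # drop padding around the register name
    val=${val#"${val%%[![:space:]]*}"}  # trim leading whitespace from the value
    ctrl_info[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)       # requires nvme-cli
echo "sn=${ctrl_info[sn]} mdts=${ctrl_info[mdts]}"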
02:01:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 
00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:51.488 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.488 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 
00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:31:51.489 02:01:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:31:51.489 02:01:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:31:51.489 02:01:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:51.489 02:01:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.489 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:51.490 
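For the nvme0n1 namespace dumped earlier, nlbaf=7 means eight LBA formats (lbaf0-lbaf7), and flbas=0x4 selects lbaf4, the entry flagged "(in use)": ms:0 lbads:12, i.e. no metadata and 4096-byte blocks. lbads is log2 of the LBA data size, so the geometry works out as below (values copied from the trace):

# Decode the nvme0n1 geometry captured in this trace.
nsze=$((0x140000))                 # namespace size in logical blocks
lbads=12                           # from lbaf4, the format in use
block_size=$((1 << lbads))         # 2^12 = 4096 bytes
echo "block size: ${block_size} B"
echo "capacity:   $((nsze * block_size / 1024**3)) GiB"   # 5 GiB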
02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 
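The functions.sh@47-@52 entries in the trace are the controller enumeration loop: walk /sys/class/nvme/nvme*, resolve each controller's PCI address, gate it through pci_can_use (which applies the allow/block PCI filters in scripts/common.sh), then run nvme_get per device. A compact sketch of that shape, simplified from the real loop:

# Simplified sketch of the enumeration visible at functions.sh@47-52.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                               # e.g. nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
    echo "found $ctrl_dev at $pci"
    # nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # as the trace then does
done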
00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme1[avscc]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.490 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 
02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[hmmaxd]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.491 02:01:00 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:31:51.491 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[active_power_workload]="-"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:31:51.492 02:01:00 
nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.492 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme1n1[npda]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:31:51.493 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.493 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:31:51.494 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:31:51.494 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:31:51.494 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.494 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.494 02:01:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:31:51.494 02:01:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:31:51.757 02:01:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:31:51.757 02:01:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:31:51.757 02:01:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:51.757 02:01:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:31:51.757 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:31:51.758 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 
02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
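Between the nvme1 and nvme2 dumps the trace (functions.sh@47-63 above) enumerated /sys/class/nvme/nvme*, resolved each controller's PCI address, and registered its namespaces through a bash nameref. Roughly the following, where the sysfs readlink for the PCI address is an assumption about how functions.sh@49 resolves it, not confirmed SPDK source:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed lookup)
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns=${ctrl_dev}_ns                # nameref, as in functions.sh@53
        for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme2/nvme2n1 ...
            [[ -e $ns ]] && _ctrl_ns[${ns##*n}]=${ns##*/}
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index 1 -> nvme1, 2 -> nvme2
    done
    unset -n _ctrl_ns
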
00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:31:51.758 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.758 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
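The wctemp=343 / cctemp=373 pair captured just above is in Kelvin, as the NVMe spec defines those Identify Controller thresholds, so this QEMU controller warns at 70 C and treats 100 C as critical. Once the array is populated:

    # WCTEMP/CCTEMP are Kelvin; convert for readability:
    echo "warn=$(( nvme2[wctemp] - 273 ))C crit=$(( nvme2[cctemp] - 273 ))C"
    # -> warn=70C crit=100C
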
00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:31:51.759 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.759 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 
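The oncs=0x15d recorded above is the Optional NVM Command Support bitmask; the bit this nvme_scc run presumably cares about is bit 8, the Copy (simple copy) command, which 0x15d has set:

    oncs=0x15d                                    # from nvme2[oncs] above
    (( (oncs >> 8) & 1 )) && echo "Copy (simple copy) supported"
    # 0x15d also sets bits 0/2/3/4/6: Compare, Dataset Management,
    # Write Zeroes, saveable features, Timestamp (NVMe base spec bit layout)
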
00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:31:51.760 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 
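
What this stretch of xtrace is exercising is nvme/functions.sh's nvme_get() helper, which runs nvme-cli and folds each "reg : val" line of its output into a global associative array (nvme2 for id-ctrl above, nvme2n1 for id-ns here). Reconstructed from the functions.sh@16-@23 trace lines, the parser looks roughly like the sketch below; it is an approximation inferred from the trace, not a verbatim copy of the helper, and NVME_CMD stands in for the /usr/local/src/nvme-cli/nvme path seen at @16:

    nvme_get() {
        local ref=$1 reg val                  # @17: ref=nvme2n1
        shift                                 # @18: "$@" is now: id-ns /dev/nvme2n1
        local -gA "$ref=()"                   # @20: declare the global assoc array
        while IFS=: read -r reg val; do       # @21: split on the first ':' only
            [[ -n $val ]] || continue         # @22: banner lines carry no value
            reg=${reg// /} val=${val# }       # assumed cleanup: "lbaf  0" -> "lbaf0"
            eval "${ref}[${reg}]=\"${val}\""  # @23: e.g. nvme2n1[nsze]="0x100000"
        done < <("${NVME_CMD:-nvme}" "$@")    # @16: /usr/local/src/nvme-cli/nvme id-ns ...
    }

Invoked as at @57 (nvme_get nvme2n1 id-ns /dev/nvme2n1), this leaves every identify field addressable as ${nvme2n1[nsze]}, ${nvme2n1[flbas]}, and so on, which is exactly what the trace entries here are assigning one register at a time.
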
00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.761 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.762 02:01:00 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:31:51.762 
02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:31:51.762 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:31:51.763 02:01:00 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 
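
nvme2n2 carries the same layout as nvme2n1: flbas=0x4, with lbaf4 flagged "(in use)" in the dump above. Per the NVMe spec the low nibble of FLBAS selects the active LBA format, so the in-use block size falls out of the arrays this test builds. An illustrative consumer, not part of functions.sh:

    lbaf_idx=$((nvme2n2[flbas] & 0xf))    # 0x4 & 0xf = 4 -> lbaf4
    fmt=${nvme2n2[lbaf$lbaf_idx]}         # "ms:0 lbads:12 rp:0 (in use)"
    [[ $fmt =~ lbads:([0-9]+) ]]
    block_size=$((1 << BASH_REMATCH[1]))  # 2^12 = 4096-byte blocks; ms:0 = no metadata
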
00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.763 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 
-- # eval 'nvme2n3[noiob]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:51.764 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:31:51.765 02:01:00 nvme_scc -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:31:51.765 02:01:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:31:51.765 02:01:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:31:51.765 02:01:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:51.765 02:01:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:31:51.765 02:01:00 
nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:31:51.765 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:31:51.766 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:51.766 02:01:00 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:31:51.766 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc 
-- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:31:51.767 02:01:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 
02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:31:51.767 02:01:00 
nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:31:51.767 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:31:51.768 02:01:00 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:51.768 02:01:00 nvme_scc -- 
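
The long run of IFS=:/read/eval steps above is the nvme_get helper filling one bash associative array per controller from nvme-cli's id-ctrl text output, one "reg : val" pair per line. A self-contained sketch of the same pattern, assuming plain key:value output and using a hypothetical array name regs instead of the script's eval into a named global:

    # Parse `nvme id-ctrl` text output into a bash associative array.
    declare -A regs
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # register name, e.g. "oncs"
        [[ -n $reg && -n $val ]] || continue
        regs[$reg]=${val# }           # raw value, e.g. "0x15d"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "oncs=${regs[oncs]}"
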
nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # 
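
The loop being traced here is the feature filter: for each scanned controller, get_nvme_ctrl_feature resolves the register array through a bash nameref and ctrl_has_scc keeps the controller only if ONCS bit 8 is set, which in NVMe is the Copy (simple copy) command bit. Every QEMU controller reports oncs=0x15d, and 0x15d & 0x100 is non-zero, so all four pass. A condensed sketch of that gate:

    # Condensed form of the ctrl_has_scc check traced above.
    declare -A nvme1=([oncs]=0x15d)   # register map built by the scan
    ctrl_has_scc() {
        local -n _ctrl=$1             # nameref: $1 names the array to read
        (( _ctrl[oncs] & 1 << 8 ))    # ONCS bit 8 = Copy command support
    }
    ctrl_has_scc nvme1 && echo nvme1  # 0x15d & 0x100 = 0x100 -> kept
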
oncs=0x15d
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:31:51.768 02:01:00 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:31:51.768 02:01:00 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:31:51.768 02:01:00 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:31:51.768 02:01:00 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:31:52.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:52.902 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:31:52.902 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:31:52.902 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:31:53.162 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:31:53.162 02:01:01 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:31:53.162 02:01:01 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:31:53.162 02:01:01 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:53.162 02:01:01 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:31:53.162 ************************************
00:31:53.162 START TEST nvme_simple_copy
00:31:53.162 ************************************
00:31:53.162 02:01:02 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:31:53.421 Initializing NVMe Controllers
00:31:53.421 Attaching to 0000:00:10.0
00:31:53.421 Controller supports SCC. Attached to 0000:00:10.0
00:31:53.421 Namespace ID: 1 size: 6GB
00:31:53.421 Initialization complete.
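
The result lines that follow are simple_copy's self-check: fill LBAs 0-63 with random data, issue one Simple Copy of that range to destination LBA 256, read both ranges back, and count matching blocks; 64 matches means the copy was bit-exact. The test drives the controller through SPDK's NVMe driver, but the verification amounts to this sketch, phrased here with plain (and destructive) block I/O and hypothetical scratch paths:

    # Illustration only: write, copy, then compare the two LBA ranges.
    dev=/dev/nvme0n1; bs=4096
    dd if=/dev/urandom of="$dev" bs=$bs seek=0 count=64 oflag=direct
    # ... an NVMe Simple Copy of LBAs 0-63 to starting LBA 256 goes here ...
    dd if="$dev" of=/tmp/src.bin bs=$bs skip=0   count=64 iflag=direct
    dd if="$dev" of=/tmp/dst.bin bs=$bs skip=256 count=64 iflag=direct
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"
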
00:31:53.421
00:31:53.421 Controller QEMU NVMe Ctrl (12340 )
00:31:53.421 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:31:53.421 Namespace Block Size:4096
00:31:53.421 Writing LBAs 0 to 63 with Random Data
00:31:53.421 Copied LBAs from 0 - 63 to the Destination LBA 256
00:31:53.421 LBAs matching Written Data: 64
00:31:53.421
00:31:53.421 real 0m0.317s
00:31:53.421 user 0m0.125s
00:31:53.421 sys 0m0.090s
00:31:53.421 02:01:02 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:53.421 ************************************
00:31:53.421 END TEST nvme_simple_copy 02:01:02 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:31:53.421 ************************************
00:31:53.421 ************************************
00:31:53.421 END TEST nvme_scc
00:31:53.421 ************************************
00:31:53.421
00:31:53.421 real 0m8.229s
00:31:53.421 user 0m1.436s
00:31:53.421 sys 0m1.769s
00:31:53.421 02:01:02 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:53.421 02:01:02 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:31:53.421 02:01:02 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:31:53.421 02:01:02 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:31:53.421 02:01:02 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:31:53.421 02:01:02 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:31:53.421 02:01:02 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:31:53.421 02:01:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:31:53.421 02:01:02 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:53.421 02:01:02 -- common/autotest_common.sh@10 -- # set +x
00:31:53.421 ************************************
00:31:53.421 START TEST nvme_fdp
00:31:53.421 ************************************
00:31:53.421 02:01:02 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:31:53.680 * Looking for test storage...
00:31:53.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:31:53.680 02:01:02 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:31:53.680 02:01:02 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:31:53.680 02:01:02 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:31:53.680 02:01:02 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:31:53.680 02:01:02 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:31:53.681 02:01:02 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.681 02:01:02 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.681 --rc genhtml_branch_coverage=1 00:31:53.681 --rc genhtml_function_coverage=1 00:31:53.681 --rc genhtml_legend=1 00:31:53.681 --rc geninfo_all_blocks=1 00:31:53.681 --rc geninfo_unexecuted_blocks=1 00:31:53.681 00:31:53.681 ' 00:31:53.681 02:01:02 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.681 --rc genhtml_branch_coverage=1 00:31:53.681 --rc genhtml_function_coverage=1 00:31:53.681 --rc genhtml_legend=1 00:31:53.681 --rc geninfo_all_blocks=1 00:31:53.681 --rc geninfo_unexecuted_blocks=1 00:31:53.681 00:31:53.681 ' 00:31:53.681 02:01:02 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.681 --rc genhtml_branch_coverage=1 00:31:53.681 --rc genhtml_function_coverage=1 00:31:53.681 --rc genhtml_legend=1 00:31:53.681 --rc geninfo_all_blocks=1 00:31:53.681 --rc geninfo_unexecuted_blocks=1 00:31:53.681 00:31:53.681 ' 00:31:53.681 02:01:02 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.681 --rc genhtml_branch_coverage=1 00:31:53.681 --rc genhtml_function_coverage=1 00:31:53.681 --rc genhtml_legend=1 00:31:53.681 --rc geninfo_all_blocks=1 00:31:53.681 --rc geninfo_unexecuted_blocks=1 00:31:53.681 00:31:53.681 ' 00:31:53.681 02:01:02 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
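
The lt/cmp_versions steps traced above decide whether the installed lcov (1.15) is older than 2.x, so the matching LCOV_OPTS are exported just above: both version strings are split on IFS=.-: and compared field by field. The same idea as a compact, hypothetical helper:

    # Field-wise dotted-version compare, as cmp_versions does above.
    ver_lt() {
        local IFS=.-: i; local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov older than 2.x"   # true: 1 < 2
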
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.681 02:01:02 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.681 02:01:02 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.681 02:01:02 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.681 02:01:02 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.681 02:01:02 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:31:53.681 02:01:02 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:53.681 02:01:02 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:31:53.681 02:01:02 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:53.681 02:01:02 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:54.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:54.261 Waiting for block devices as requested 00:31:54.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:54.530 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:54.530 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:54.530 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:59.807 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:59.807 02:01:08 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:31:59.807 02:01:08 nvme_fdp 
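
The PATH printed above keeps repeating /opt/go, /opt/protoc and /opt/golangci because each source of paths/export.sh prepends the same toolchain directories again. Lookup still works (the first hit wins), but a guard like this hypothetical path_prepend would keep PATH canonical:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, leave it alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH
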
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:31:59.807 02:01:08 nvme_fdp -- scripts/common.sh@18 -- # local i 00:31:59.807 02:01:08 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:31:59.807 02:01:08 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:59.807 02:01:08 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:31:59.807 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:59.808 02:01:08 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
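The oacs value recorded above, 0x12a, is the Optional Admin Command Support bitmask from id-ctrl. A minimal decode sketch, standalone bash rather than anything in nvme/functions.sh, with bit names taken from the NVMe base specification:

  oacs=0x12a
  names=("Security Send/Receive" "Format NVM" "Firmware Commit/Download"
         "Namespace Management" "Device Self-test" "Directives"
         "NVMe-MI Send/Receive" "Virtualization Management"
         "Doorbell Buffer Config" "Get LBA Status")
  # print each admin capability whose bit is set in OACS
  for bit in "${!names[@]}"; do
      (( oacs & (1 << bit) )) && echo "OACS bit $bit: ${names[$bit]}"
  done
  # 0x12a -> Format NVM, Namespace Management, Directives, Doorbell Buffer Config

The Directives bit is the interesting one for an nvme_fdp run, since FDP placement hints are carried through the directives mechanism.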
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:31:59.808 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
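Two of the values just captured are packed fields: wctemp/cctemp are reported in kelvins (343 K is roughly 70 C warning, 373 K roughly 100 C critical), and sqes/cqes each hold two log2 sizes in one byte. A small standalone sketch of the sqes/cqes decode:

  sqes=0x66 cqes=0x44
  # low nibble = required (minimum) entry size, high nibble = maximum, both log2(bytes)
  echo "SQ entry size: $((2 ** (sqes & 0xf)))..$((2 ** (sqes >> 4))) bytes"   # 64..64
  echo "CQ entry size: $((2 ** (cqes & 0xf)))..$((2 ** (cqes >> 4))) bytes"   # 16..16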
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:31:59.809 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
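The trace above is nvme_get consuming nvme-cli output one line at a time: split on the first ':', skip lines without a value, and eval the pair into a global associative array named after the device. A condensed sketch of that pattern as it shows in the trace (a paraphrase; the real nvme/functions.sh differs in detail):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                          # e.g. declare -gA nvme0n1=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue                # headers and blanks carry no value
          reg=${reg//[[:space:]]/}                 # "flbas " -> "flbas"
          eval "${ref}[$reg]=\"${val# }\""         # nvme0n1[flbas]="0x4"
      done < <(/usr/local/src/nvme-cli/nvme "$@")  # id-ctrl /dev/nvme0, id-ns /dev/nvme0n1, ...
  }

Each device then lives on as an associative array (nvme0, nvme0n1, nvme1, ...) whose keys are the identify field names seen throughout the rest of this log.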
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:31:59.810 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:31:59.811 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
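flbas=0x4 selects LBA format 4, matching the "(in use)" tag on lbaf4 above: the low four bits of FLBAS index lbaf0..lbaf15, and lbads is log2 of the logical block size. A quick standalone decode, per the NVMe base spec field layout:

  flbas=0x4 lbads=12 nsze=0x140000          # lbads from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
  echo "in-use LBA format: $((flbas & 0xf))"          # -> 4
  echo "logical block size: $((2 ** lbads)) bytes"    # -> 4096
  echo "namespace size: $((nsze)) blocks"             # -> 1310720 blocks, i.e. 5 GiB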
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:31:59.812 02:01:08 nvme_fdp -- scripts/common.sh@18 -- # local i
00:31:59.812 02:01:08 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:31:59.812 02:01:08 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:31:59.812 02:01:08 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
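pci_can_use accepts 0000:00:10.0 here because both filter checks above come up empty: no block-list match and no allow list configured. A rough sketch of that allow/block gate, with list variable names assumed rather than copied from scripts/common.sh:

  pci_can_use() {
      local bdf=$1 i
      for i in $PCI_BLOCKED; do                # assumed name for a space-separated block list
          [[ $i == "$bdf" ]] && return 1
      done
      [[ -z $PCI_ALLOWED ]] && return 0        # no allow list: every device passes
      for i in $PCI_ALLOWED; do
          [[ $i == "$bdf" ]] && return 0
      done
      return 1
  }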
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:31:59.812 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:31:59.813 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 --
# [[ -n 0x66 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:31:59.814 
02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:59.814 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21 
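The register dump above comes from a small parsing loop in nvme/functions.sh (lines 16-23 in this trace): nvme-cli's "field : value" output is split on ':' and each pair is stored into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace (names follow the trace; the real script does more trimming and quoting):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                 # e.g. declare the global array nvme1
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}        # drop padding around the field name
          [[ -n $val ]] || continue       # keep only lines that carry a value
          eval "${ref}[\$reg]=\${val# }"  # e.g. nvme1[oacs]=0x12a
      done < <(nvme "$@")
  }
  # usage, as in the trace: nvme_get nvme1 id-ctrl /dev/nvme1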
00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@54-57 -- # found namespace /sys/class/nvme/nvme1/nvme1n1; ns_dev=nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme1n1 (id-ns): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7
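For reference, the same namespace identify can be reproduced by hand (assuming nvme-cli is installed); nsze/ncap/nuse are counted in logical blocks, so the 0x17a17a above is 1,548,666 blocks:

  nvme id-ns /dev/nvme1n1 | grep -E '^(nsze|ncap|nuse)'
  printf '%d\n' 0x17a17a    # 1548666 logical blocks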
00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme1n1 (id-ns, continued): mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0
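The dpc=0x1f above is the namespace's end-to-end data protection capabilities; per the NVMe spec layout, bits 0-2 advertise PI types 1-3 and bits 3-4 the first/last-bytes metadata placement, so 0x1f means all five are supported. A quick decode:

  dpc=0x1f
  for bit in 0 1 2; do
      (( dpc >> bit & 1 )) && echo "PI type $((bit + 1)) supported"
  done
  (( dpc >> 3 & 1 )) && echo "PI in first bytes of metadata"
  (( dpc >> 4 & 1 )) && echo "PI in last bytes of metadata"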
00:31:59.815 02:01:08 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme1n1 (id-ns, continued): nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:31:59.816 02:01:08 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme1n1 (id-ns, LBA formats): lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
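The flbas=0x7 reported earlier selects which of those formats is active (lbaf7, marked "(in use)" above): its low nibble indexes the LBA format, lbads is log2 of the block size, and ms is the metadata bytes per block. Decoded with this trace's values:

  flbas=0x7 lbads=12 ms=64              # values from the dump above
  fmt=$(( flbas & 0xf ))                # low nibble picks the format -> 7
  echo "in use: lbaf${fmt}, $((1 << lbads))B blocks + ${ms}B metadata"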
00:31:59.816 02:01:08 nvme_fdp -- nvme/functions.sh@58-63 -- # _ctrl_ns[${ns##*n}]=nvme1n1; ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:10.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:31:59.816 02:01:08 nvme_fdp -- nvme/functions.sh@47-50 -- # next controller: /sys/class/nvme/nvme2, pci=0000:00:12.0; pci_can_use 0000:00:12.0 (scripts/common.sh@18-27) returned 0
00:31:59.816 02:01:08 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
00:31:59.816 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
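The @58-@63 assignments above are the per-controller bookkeeping: one associative array of namespaces per controller, plus global maps from the controller name to that array, to its PCI address, and to an index-ordered list. A standalone reconstruction with this trace's values (illustrative snippet, not the full script):

  declare -A ctrls nvmes bdfs nvme1_ns
  declare -a ordered_ctrls
  nvme1_ns[1]=nvme1n1            # ${ns##*n} -> namespace index 1
  ctrls[nvme1]=nvme1             # controller device name
  nvmes[nvme1]=nvme1_ns          # name of its namespace array (deref via local -n)
  bdfs[nvme1]=0000:00:10.0       # PCI address from sysfs
  ordered_ctrls[1]=nvme1         # index derived from ${ctrl_dev/nvme/}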
00:31:59.817 02:01:08 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme2 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme2 (id-ctrl, continued): rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.082 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 
-- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.083 02:01:08 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:32:00.083 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:32:00.084 02:01:08 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.084 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 
lbads:12 rp:0 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r
reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[dlfeat]="1"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.085 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:00.086 02:01:08 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "'
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "'
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.086 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "'
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 --
# eval 'nvme2n3[ncap]="0x100000"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:00.087 
02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.087 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:00.088 02:01:08 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:00.088 
02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:00.088 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:00.089 02:01:08 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:00.089 02:01:08 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:00.089 02:01:08 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:00.089 
02:01:08 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 
02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:00.089 02:01:08 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.089 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:00.090 02:01:08 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:00.090 02:01:08 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.090 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 
00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.091 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:00.092 02:01:08 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@196 -- # type -t 
ctrl_has_fdp 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:32:00.092 02:01:08 nvme_fdp -- 
nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:00.092 02:01:08 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:00.092 02:01:09 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:32:00.092 02:01:09 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:32:00.092 02:01:09 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:32:00.092 02:01:09 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:32:00.092 02:01:09 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:32:00.092 02:01:09 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:00.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:01.226 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.226 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.226 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.226 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:01.485 02:01:10 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:01.485 02:01:10 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:01.485 02:01:10 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.485 02:01:10 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:01.485 ************************************ 00:32:01.485 START TEST nvme_flexible_data_placement 00:32:01.485 ************************************ 00:32:01.485 02:01:10 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:01.744 Initializing NVMe Controllers 00:32:01.744 Attaching to 0000:00:13.0 00:32:01.744 Controller supports FDP Attached to 0000:00:13.0 00:32:01.744 Namespace ID: 1 Endurance Group ID: 1 00:32:01.744 Initialization complete. 
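The ctrl_has_fdp walk traced above selects nvme3 because its CTRATT value (0x88010) has bit 19 set, the Flexible Data Placement capability bit, while the other controllers report only 0x8000. A minimal standalone sketch of the same check, assuming nvme-cli with JSON output plus jq rather than the cached id-ctrl arrays that functions.sh actually uses:

```bash
#!/usr/bin/env bash
# Sketch only: flag controllers whose CTRATT advertises FDP (bit 19),
# mirroring the ctrl_has_fdp walk traced above. Not the functions.sh
# implementation; assumes nvme-cli with JSON support and jq are installed.
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  # nvme-cli's JSON id-ctrl reports ctratt as a decimal integer.
  ctratt=$(nvme id-ctrl -o json "/dev/${ctrl##*/}" | jq -r '.ctratt')
  # Bit 19 of CTRATT is the Flexible Data Placement support bit.
  if (( ctratt & 1 << 19 )); then
    printf '%s supports FDP (ctratt=0x%x)\n' "${ctrl##*/}" "$ctratt"
  fi
done
```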
00:32:01.744 
00:32:01.744 ==================================
00:32:01.744 == FDP tests for Namespace: #01 ==
00:32:01.744 ==================================
00:32:01.744 
00:32:01.744 Get Feature: FDP:
00:32:01.744 =================
00:32:01.744 Enabled: Yes
00:32:01.744 FDP configuration Index: 0
00:32:01.744 
00:32:01.744 FDP configurations log page
00:32:01.744 ===========================
00:32:01.744 Number of FDP configurations: 1
00:32:01.744 Version: 0
00:32:01.744 Size: 112
00:32:01.744 FDP Configuration Descriptor: 0
00:32:01.744 Descriptor Size: 96
00:32:01.744 Reclaim Group Identifier format: 2
00:32:01.744 FDP Volatile Write Cache: Not Present
00:32:01.744 FDP Configuration: Valid
00:32:01.744 Vendor Specific Size: 0
00:32:01.744 Number of Reclaim Groups: 2
00:32:01.744 Number of Reclaim Unit Handles: 8
00:32:01.744 Max Placement Identifiers: 128
00:32:01.744 Number of Namespaces Supported: 256
00:32:01.744 Reclaim unit Nominal Size: 6000000 bytes
00:32:01.744 Estimated Reclaim Unit Time Limit: Not Reported
00:32:01.744 RUH Desc #000: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #001: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #002: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #003: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #004: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #005: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #006: RUH Type: Initially Isolated
00:32:01.744 RUH Desc #007: RUH Type: Initially Isolated
00:32:01.744 
00:32:01.744 FDP reclaim unit handle usage log page
00:32:01.744 ======================================
00:32:01.744 Number of Reclaim Unit Handles: 8
00:32:01.744 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:32:01.744 RUH Usage Desc #001: RUH Attributes: Unused
00:32:01.744 RUH Usage Desc #002: RUH Attributes: Unused
00:32:01.744 RUH Usage Desc #003: RUH Attributes: Unused
00:32:01.744 RUH Usage Desc #004: RUH Attributes: Unused
00:32:01.744 RUH Usage Desc #005: RUH Attributes: Unused
00:32:01.744 RUH Usage Desc #006: RUH Attributes: Unused
00:32:01.744 RUH Usage Desc #007: RUH Attributes: Unused
00:32:01.744 
00:32:01.744 FDP statistics log page
00:32:01.744 =======================
00:32:01.744 Host bytes with metadata written: 833114112
00:32:01.744 Media bytes with metadata written: 833228800
00:32:01.744 Media bytes erased: 0
00:32:01.744 
00:32:01.744 FDP Reclaim unit handle status
00:32:01.744 ==============================
00:32:01.744 Number of RUHS descriptors: 2
00:32:01.744 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000457b
00:32:01.744 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:32:01.744 
00:32:01.744 FDP write on placement id: 0 success
00:32:01.744 
00:32:01.744 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:32:01.744 
00:32:01.744 IO mgmt send: RUH update for Placement ID: #0 Success
00:32:01.744 
00:32:01.744 Get Feature: FDP Events for Placement handle: #0
00:32:01.744 ========================
00:32:01.744 Number of FDP Events: 6
00:32:01.744 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:32:01.744 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:32:01.744 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:32:01.744 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:32:01.744 FDP Event: #4 Type: Media Reallocated Enabled: No
00:32:01.744 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:32:01.744 
00:32:01.744 FDP events log page
00:32:01.744 ===================
00:32:01.744 Number of FDP events: 1
00:32:01.744 FDP Event #0:
00:32:01.744 Event Type: RU Not Written to Capacity
00:32:01.744 Placement Identifier: Valid
00:32:01.744 NSID: Valid
00:32:01.744 Location: Valid
00:32:01.744 Placement Identifier: 0
00:32:01.744 Event Timestamp: a
00:32:01.744 Namespace Identifier: 1
00:32:01.744 Reclaim Group Identifier: 0
00:32:01.744 Reclaim Unit Handle Identifier: 0
00:32:01.744 
00:32:01.744 FDP test passed
00:32:01.744 
00:32:01.744 real 0m0.318s
00:32:01.744 user 0m0.113s
00:32:01.744 sys 0m0.103s
00:32:01.744 02:01:10 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:01.744 02:01:10 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:32:01.744 ************************************
00:32:01.744 END TEST nvme_flexible_data_placement
00:32:01.744 ************************************
00:32:01.744 
00:32:01.744 real 0m8.260s
00:32:01.744 user 0m1.500s
00:32:01.744 sys 0m1.734s
00:32:01.744 02:01:10 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:01.744 02:01:10 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:32:01.744 ************************************
00:32:01.744 END TEST nvme_fdp
00:32:01.744 ************************************
00:32:01.744 02:01:10 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:32:01.744 02:01:10 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:32:01.744 02:01:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:32:01.744 02:01:10 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:01.744 02:01:10 -- common/autotest_common.sh@10 -- # set +x
00:32:01.744 ************************************
00:32:01.744 START TEST nvme_rpc
00:32:01.744 ************************************
00:32:01.744 02:01:10 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:32:02.002 * Looking for test storage...
00:32:02.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.002 02:01:10 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:02.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.002 --rc genhtml_branch_coverage=1 00:32:02.002 --rc genhtml_function_coverage=1 00:32:02.002 --rc genhtml_legend=1 00:32:02.002 --rc geninfo_all_blocks=1 00:32:02.002 --rc geninfo_unexecuted_blocks=1 00:32:02.002 00:32:02.002 ' 00:32:02.002 02:01:10 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:02.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.003 --rc genhtml_branch_coverage=1 00:32:02.003 --rc genhtml_function_coverage=1 00:32:02.003 --rc genhtml_legend=1 00:32:02.003 --rc geninfo_all_blocks=1 00:32:02.003 --rc geninfo_unexecuted_blocks=1 00:32:02.003 00:32:02.003 ' 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:32:02.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.003 --rc genhtml_branch_coverage=1 00:32:02.003 --rc genhtml_function_coverage=1 00:32:02.003 --rc genhtml_legend=1 00:32:02.003 --rc geninfo_all_blocks=1 00:32:02.003 --rc geninfo_unexecuted_blocks=1 00:32:02.003 00:32:02.003 ' 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:02.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.003 --rc genhtml_branch_coverage=1 00:32:02.003 --rc genhtml_function_coverage=1 00:32:02.003 --rc genhtml_legend=1 00:32:02.003 --rc geninfo_all_blocks=1 00:32:02.003 --rc geninfo_unexecuted_blocks=1 00:32:02.003 00:32:02.003 ' 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67760 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:02.003 02:01:10 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67760 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67760 ']' 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.003 02:01:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:02.261 [2024-10-15 02:01:11.131180] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:32:02.261 [2024-10-15 02:01:11.131377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67760 ] 00:32:02.519 [2024-10-15 02:01:11.317120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:02.778 [2024-10-15 02:01:11.635154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.778 [2024-10-15 02:01:11.635163] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.742 02:01:12 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:03.742 02:01:12 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:32:03.742 02:01:12 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:32:04.025 [2024-10-15 02:01:12.856682] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200009617da0 was disconnected and freed. delete nvme_qpair. 00:32:04.025 Nvme0n1 00:32:04.025 02:01:12 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:04.025 02:01:12 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:04.283 request: 00:32:04.283 { 00:32:04.283 "bdev_name": "Nvme0n1", 00:32:04.283 "filename": "non_existing_file", 00:32:04.283 "method": "bdev_nvme_apply_firmware", 00:32:04.283 "req_id": 1 00:32:04.283 } 00:32:04.283 Got JSON-RPC error response 00:32:04.283 response: 00:32:04.283 { 00:32:04.283 "code": -32603, 00:32:04.283 "message": "open file failed." 00:32:04.283 } 00:32:04.283 [2024-10-15 02:01:13.124367] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200009617da0 was disconnected and freed. delete nvme_qpair. 
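The nvme_rpc stage above drives spdk_tgt purely over JSON-RPC, including a deliberate negative path: applying firmware from a file that does not exist must fail. Condensed from the trace, a minimal sketch of the same sequence — BDF 0000:00:10.0 is the address probed in this particular run, so adjust it for other machines:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # negative test: the file is intentionally missing, so rpc.py is expected to
    # report the JSON-RPC error captured above: "code": -32603, "message": "open file failed."
    ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
    ./scripts/rpc.py bdev_nvme_detach_controller Nvme0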
00:32:04.283 02:01:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:04.283 02:01:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:04.283 02:01:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:04.542 02:01:13 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:04.542 02:01:13 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67760 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67760 ']' 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67760 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67760 00:32:04.542 killing process with pid 67760 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67760' 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67760 00:32:04.542 02:01:13 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67760 00:32:07.075 ************************************ 00:32:07.075 END TEST nvme_rpc 00:32:07.075 ************************************ 00:32:07.075 00:32:07.075 real 0m5.058s 00:32:07.075 user 0m9.198s 00:32:07.075 sys 0m0.841s 00:32:07.075 02:01:15 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.075 02:01:15 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:07.075 02:01:15 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:07.075 02:01:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:07.075 02:01:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.075 02:01:15 -- common/autotest_common.sh@10 -- # set +x 00:32:07.075 ************************************ 00:32:07.075 START TEST nvme_rpc_timeouts 00:32:07.075 ************************************ 00:32:07.075 02:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:07.075 * Looking for test storage... 
00:32:07.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:07.075 02:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:07.075 02:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:07.075 02:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.075 02:01:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.075 --rc genhtml_branch_coverage=1 00:32:07.075 --rc genhtml_function_coverage=1 00:32:07.075 --rc genhtml_legend=1 00:32:07.075 --rc geninfo_all_blocks=1 00:32:07.075 --rc geninfo_unexecuted_blocks=1 00:32:07.075 00:32:07.075 ' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.075 --rc 
genhtml_branch_coverage=1 00:32:07.075 --rc genhtml_function_coverage=1 00:32:07.075 --rc genhtml_legend=1 00:32:07.075 --rc geninfo_all_blocks=1 00:32:07.075 --rc geninfo_unexecuted_blocks=1 00:32:07.075 00:32:07.075 ' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.075 --rc genhtml_branch_coverage=1 00:32:07.075 --rc genhtml_function_coverage=1 00:32:07.075 --rc genhtml_legend=1 00:32:07.075 --rc geninfo_all_blocks=1 00:32:07.075 --rc geninfo_unexecuted_blocks=1 00:32:07.075 00:32:07.075 ' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.075 --rc genhtml_branch_coverage=1 00:32:07.075 --rc genhtml_function_coverage=1 00:32:07.075 --rc genhtml_legend=1 00:32:07.075 --rc geninfo_all_blocks=1 00:32:07.075 --rc geninfo_unexecuted_blocks=1 00:32:07.075 00:32:07.075 ' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67848 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67848 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67881 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:32:07.075 02:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67881 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67881 ']' 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.075 02:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:07.334 [2024-10-15 02:01:16.148125] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:32:07.334 [2024-10-15 02:01:16.148318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67881 ] 00:32:07.334 [2024-10-15 02:01:16.326284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:07.592 [2024-10-15 02:01:16.584551] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.592 [2024-10-15 02:01:16.584567] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.527 02:01:17 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.527 Checking default timeout settings: 00:32:08.527 02:01:17 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:32:08.527 02:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:08.527 02:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:09.119 Making settings changes with rpc: 00:32:09.119 02:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:09.119 02:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:09.384 Check default vs. modified settings: 00:32:09.384 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:32:09.384 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:09.643 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:09.643 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:09.643 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:09.644 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67848 00:32:09.644 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:09.644 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:09.644 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67848 00:32:09.644 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:09.644 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:09.901 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:09.901 Setting action_on_timeout is changed as expected. 00:32:09.901 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67848 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67848 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:09.902 Setting timeout_us is changed as expected. 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67848 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67848 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:09.902 Setting timeout_admin_us is changed as expected. 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
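The check above is mechanical: the test snapshots the target's configuration before and after bdev_nvme_set_options and asserts that exactly the three timeout knobs changed. A condensed sketch of the procedure — the real run uses per-pid tmpfiles such as /tmp/settings_default_67848; plain names are used here for brevity:

    ./scripts/rpc.py save_config > /tmp/settings_default
    ./scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    ./scripts/rpc.py save_config > /tmp/settings_modified
    # expected transitions, as verified above:
    #   action_on_timeout: none -> abort
    #   timeout_us:        0    -> 12000000
    #   timeout_admin_us:  0    -> 24000000
    for s in action_on_timeout timeout_us timeout_admin_us; do
        grep "$s" /tmp/settings_default /tmp/settings_modified
    done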
00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67848 /tmp/settings_modified_67848 00:32:09.902 02:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67881 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67881 ']' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67881 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67881 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:09.902 killing process with pid 67881 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67881' 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67881 00:32:09.902 02:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67881 00:32:12.457 RPC TIMEOUT SETTING TEST PASSED. 00:32:12.457 02:01:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:32:12.457 00:32:12.457 real 0m5.345s 00:32:12.457 user 0m10.167s 00:32:12.457 sys 0m0.806s 00:32:12.457 02:01:21 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.457 02:01:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:12.457 ************************************ 00:32:12.457 END TEST nvme_rpc_timeouts 00:32:12.457 ************************************ 00:32:12.457 02:01:21 -- spdk/autotest.sh@239 -- # uname -s 00:32:12.457 02:01:21 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:32:12.457 02:01:21 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:12.457 02:01:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:12.457 02:01:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.457 02:01:21 -- common/autotest_common.sh@10 -- # set +x 00:32:12.457 ************************************ 00:32:12.457 START TEST sw_hotplug 00:32:12.457 ************************************ 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:12.457 * Looking for test storage... 
00:32:12.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.457 02:01:21 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.457 --rc genhtml_branch_coverage=1 00:32:12.457 --rc genhtml_function_coverage=1 00:32:12.457 --rc genhtml_legend=1 00:32:12.457 --rc geninfo_all_blocks=1 00:32:12.457 --rc geninfo_unexecuted_blocks=1 00:32:12.457 00:32:12.457 ' 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.457 --rc genhtml_branch_coverage=1 00:32:12.457 --rc genhtml_function_coverage=1 00:32:12.457 --rc genhtml_legend=1 00:32:12.457 --rc geninfo_all_blocks=1 00:32:12.457 --rc geninfo_unexecuted_blocks=1 00:32:12.457 00:32:12.457 ' 00:32:12.457 02:01:21 
sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.457 --rc genhtml_branch_coverage=1 00:32:12.457 --rc genhtml_function_coverage=1 00:32:12.457 --rc genhtml_legend=1 00:32:12.457 --rc geninfo_all_blocks=1 00:32:12.457 --rc geninfo_unexecuted_blocks=1 00:32:12.457 00:32:12.457 ' 00:32:12.457 02:01:21 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.458 --rc genhtml_branch_coverage=1 00:32:12.458 --rc genhtml_function_coverage=1 00:32:12.458 --rc genhtml_legend=1 00:32:12.458 --rc geninfo_all_blocks=1 00:32:12.458 --rc geninfo_unexecuted_blocks=1 00:32:12.458 00:32:12.458 ' 00:32:12.458 02:01:21 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:13.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:13.024 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:13.024 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:13.024 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:13.024 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:13.024 02:01:21 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:32:13.024 02:01:21 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:32:13.024 02:01:21 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:32:13.024 02:01:21 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@233 -- # local class 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:13.024 02:01:21 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:13.024 
02:01:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:32:13.024 02:01:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:32:13.025 02:01:22 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:13.025 02:01:22 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:32:13.025 02:01:22 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:32:13.025 02:01:22 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:13.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:13.592 Waiting for block devices as requested 00:32:13.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:13.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:13.850 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.108 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:19.392 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:19.392 02:01:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:32:19.392 02:01:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:19.651 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:32:19.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:19.651 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:32:19.909 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:32:20.167 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:20.167 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:20.167 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:32:20.167 02:01:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68764 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:32:20.425 02:01:29 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:32:20.425 02:01:29 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:32:20.425 02:01:29 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:32:20.425 02:01:29 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:32:20.425 02:01:29 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:32:20.425 02:01:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:32:20.784 Initializing NVMe Controllers 00:32:20.784 Attaching to 0000:00:10.0 00:32:20.784 Attaching to 0000:00:11.0 00:32:20.784 Attached to 0000:00:10.0 00:32:20.784 Attached to 0000:00:11.0 00:32:20.784 Initialization complete. Starting I/O... 00:32:20.784 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:32:20.784 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:32:20.784 00:32:21.738 QEMU NVMe Ctrl (12340 ): 1024 I/Os completed (+1024) 00:32:21.738 QEMU NVMe Ctrl (12341 ): 1077 I/Os completed (+1077) 00:32:21.738 00:32:22.674 QEMU NVMe Ctrl (12340 ): 2328 I/Os completed (+1304) 00:32:22.674 QEMU NVMe Ctrl (12341 ): 2414 I/Os completed (+1337) 00:32:22.674 00:32:23.610 QEMU NVMe Ctrl (12340 ): 3992 I/Os completed (+1664) 00:32:23.610 QEMU NVMe Ctrl (12341 ): 4145 I/Os completed (+1731) 00:32:23.610 00:32:24.545 QEMU NVMe Ctrl (12340 ): 5780 I/Os completed (+1788) 00:32:24.545 QEMU NVMe Ctrl (12341 ): 5970 I/Os completed (+1825) 00:32:24.545 00:32:25.482 QEMU NVMe Ctrl (12340 ): 7612 I/Os completed (+1832) 00:32:25.482 QEMU NVMe Ctrl (12341 ): 7830 I/Os completed (+1860) 00:32:25.482 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:26.417 [2024-10-15 02:01:35.252047] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:32:26.417 Controller removed: QEMU NVMe Ctrl (12340 ) 00:32:26.417 [2024-10-15 02:01:35.254058] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.254136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.254169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.254198] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:26.417 [2024-10-15 02:01:35.257194] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.257257] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.257285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.257310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:32:26.417 EAL: Scan for (pci) bus failed. 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:26.417 [2024-10-15 02:01:35.279977] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:32:26.417 Controller removed: QEMU NVMe Ctrl (12341 ) 00:32:26.417 [2024-10-15 02:01:35.281807] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.281863] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.281898] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.281925] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:26.417 [2024-10-15 02:01:35.284585] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.284636] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.284666] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 [2024-10-15 02:01:35.284688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:26.417 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:32:26.676 Attaching to 0000:00:10.0 00:32:26.676 Attached to 0000:00:10.0 00:32:26.676 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:32:26.676 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:26.676 02:01:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:32:26.676 Attaching to 0000:00:11.0 00:32:26.676 Attached to 0000:00:11.0 00:32:27.616 QEMU NVMe Ctrl (12340 ): 1709 I/Os completed (+1709) 00:32:27.616 QEMU NVMe Ctrl (12341 ): 1623 I/Os completed (+1623) 00:32:27.616 00:32:28.550 QEMU NVMe Ctrl (12340 ): 3254 I/Os completed (+1545) 00:32:28.550 QEMU NVMe Ctrl (12341 ): 3224 I/Os completed (+1601) 00:32:28.550 00:32:29.486 QEMU NVMe Ctrl (12340 ): 4662 I/Os completed (+1408) 00:32:29.486 QEMU NVMe Ctrl (12341 ): 4737 I/Os completed (+1513) 00:32:29.486 00:32:30.876 QEMU NVMe Ctrl (12340 ): 6282 I/Os completed (+1620) 00:32:30.876 QEMU NVMe Ctrl (12341 ): 6387 I/Os completed (+1650) 00:32:30.876 00:32:31.809 QEMU NVMe Ctrl (12340 ): 8040 I/Os completed (+1758) 00:32:31.809 QEMU NVMe Ctrl (12341 ): 8150 I/Os completed (+1763) 00:32:31.809 00:32:32.742 QEMU NVMe Ctrl (12340 ): 9801 I/Os completed (+1761) 00:32:32.742 QEMU NVMe Ctrl (12341 ): 9999 I/Os completed (+1849) 00:32:32.742 00:32:33.675 QEMU NVMe Ctrl (12340 ): 11570 I/Os completed (+1769) 00:32:33.675 QEMU NVMe Ctrl (12341 ): 11812 I/Os completed (+1813) 00:32:33.675 00:32:34.609 QEMU NVMe Ctrl (12340 ): 13327 I/Os completed (+1757) 00:32:34.609 QEMU NVMe 
Ctrl (12341 ): 13641 I/Os completed (+1829) 00:32:34.609 00:32:35.547 QEMU NVMe Ctrl (12340 ): 15031 I/Os completed (+1704) 00:32:35.547 QEMU NVMe Ctrl (12341 ): 15383 I/Os completed (+1742) 00:32:35.547 00:32:36.483 QEMU NVMe Ctrl (12340 ): 16778 I/Os completed (+1747) 00:32:36.483 QEMU NVMe Ctrl (12341 ): 17142 I/Os completed (+1759) 00:32:36.483 00:32:37.858 QEMU NVMe Ctrl (12340 ): 18873 I/Os completed (+2095) 00:32:37.858 QEMU NVMe Ctrl (12341 ): 19035 I/Os completed (+1893) 00:32:37.858 00:32:38.794 QEMU NVMe Ctrl (12340 ): 20594 I/Os completed (+1721) 00:32:38.794 QEMU NVMe Ctrl (12341 ): 20821 I/Os completed (+1786) 00:32:38.794 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:38.794 [2024-10-15 02:01:47.586045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:32:38.794 Controller removed: QEMU NVMe Ctrl (12340 ) 00:32:38.794 [2024-10-15 02:01:47.589605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.589722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.589770] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.589807] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:38.794 [2024-10-15 02:01:47.593969] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.594048] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.594082] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.594130] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:38.794 [2024-10-15 02:01:47.612631] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:32:38.794 Controller removed: QEMU NVMe Ctrl (12341 ) 00:32:38.794 [2024-10-15 02:01:47.614548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.614610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.614647] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.614674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:38.794 [2024-10-15 02:01:47.617341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.617394] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.617441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 [2024-10-15 02:01:47.617469] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:32:38.794 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:32:38.794 EAL: Scan for (pci) bus failed. 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:32:38.794 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:32:39.064 Attaching to 0000:00:10.0 00:32:39.064 Attached to 0000:00:10.0 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:39.064 02:01:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:32:39.064 Attaching to 0000:00:11.0 00:32:39.064 Attached to 0000:00:11.0 00:32:39.633 QEMU NVMe Ctrl (12340 ): 1210 I/Os completed (+1210) 00:32:39.633 QEMU NVMe Ctrl (12341 ): 1077 I/Os completed (+1077) 00:32:39.633 00:32:40.568 QEMU NVMe Ctrl (12340 ): 2915 I/Os completed (+1705) 00:32:40.568 QEMU NVMe Ctrl (12341 ): 2821 I/Os completed (+1744) 00:32:40.568 00:32:41.504 QEMU NVMe Ctrl (12340 ): 4551 I/Os completed (+1636) 00:32:41.504 QEMU NVMe Ctrl (12341 ): 4509 I/Os completed (+1688) 00:32:41.504 00:32:42.878 QEMU NVMe Ctrl (12340 ): 6186 I/Os completed (+1635) 00:32:42.879 QEMU NVMe Ctrl (12341 ): 6263 I/Os completed (+1754) 00:32:42.879 00:32:43.825 QEMU NVMe Ctrl (12340 ): 7936 I/Os completed (+1750) 00:32:43.825 QEMU NVMe Ctrl (12341 ): 8034 I/Os completed (+1771) 00:32:43.825 00:32:44.756 QEMU NVMe Ctrl (12340 ): 9704 I/Os completed (+1768) 00:32:44.756 QEMU NVMe Ctrl (12341 ): 9819 I/Os completed (+1785) 00:32:44.756 00:32:45.690 QEMU NVMe Ctrl (12340 ): 11426 I/Os completed (+1722) 00:32:45.690 QEMU NVMe Ctrl (12341 ): 11584 I/Os completed (+1765) 00:32:45.690 
00:32:46.626 QEMU NVMe Ctrl (12340 ): 13130 I/Os completed (+1704) 00:32:46.626 QEMU NVMe Ctrl (12341 ): 13382 I/Os completed (+1798) 00:32:46.626 00:32:47.561 QEMU NVMe Ctrl (12340 ): 14790 I/Os completed (+1660) 00:32:47.561 QEMU NVMe Ctrl (12341 ): 15105 I/Os completed (+1723) 00:32:47.561 00:32:48.497 QEMU NVMe Ctrl (12340 ): 16393 I/Os completed (+1603) 00:32:48.497 QEMU NVMe Ctrl (12341 ): 16736 I/Os completed (+1631) 00:32:48.497 00:32:49.902 QEMU NVMe Ctrl (12340 ): 17954 I/Os completed (+1561) 00:32:49.902 QEMU NVMe Ctrl (12341 ): 18406 I/Os completed (+1670) 00:32:49.902 00:32:50.836 QEMU NVMe Ctrl (12340 ): 19560 I/Os completed (+1606) 00:32:50.836 QEMU NVMe Ctrl (12341 ): 20110 I/Os completed (+1704) 00:32:50.836 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:51.095 [2024-10-15 02:01:59.907194] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:32:51.095 Controller removed: QEMU NVMe Ctrl (12340 ) 00:32:51.095 [2024-10-15 02:01:59.909230] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.909302] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.909335] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.909364] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:51.095 [2024-10-15 02:01:59.912549] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.912612] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.912641] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.912668] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:51.095 [2024-10-15 02:01:59.936289] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:32:51.095 Controller removed: QEMU NVMe Ctrl (12341 ) 00:32:51.095 [2024-10-15 02:01:59.938133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.938194] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.938227] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.938253] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:51.095 [2024-10-15 02:01:59.940943] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.940995] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.941028] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 [2024-10-15 02:01:59.941052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:32:51.095 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:32:51.095 EAL: Scan for (pci) bus failed. 00:32:51.095 02:01:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:32:51.095 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:51.095 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:51.095 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:32:51.353 Attaching to 0000:00:10.0 00:32:51.353 Attached to 0000:00:10.0 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:51.353 02:02:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:32:51.353 Attaching to 0000:00:11.0 00:32:51.353 Attached to 0000:00:11.0 00:32:51.353 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:51.353 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:51.353 [2024-10-15 02:02:00.247872] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:33:03.558 02:02:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:03.558 02:02:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:03.558 02:02:12 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.99 00:33:03.558 02:02:12 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.99 00:33:03.558 02:02:12 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:33:03.558 02:02:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.99 00:33:03.558 02:02:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.99 2 00:33:03.558 remove_attach_helper took 42.99s to complete (handling 2 nvme drive(s)) 02:02:12 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68764 00:33:10.115 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68764) - No such process 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68764 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69305 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:33:10.115 02:02:18 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69305 00:33:10.115 02:02:18 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 69305 ']' 00:33:10.115 02:02:18 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.115 02:02:18 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:10.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.115 02:02:18 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.115 02:02:18 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:10.115 02:02:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:10.115 [2024-10-15 02:02:18.387592] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:33:10.115 [2024-10-15 02:02:18.387794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69305 ] 00:33:10.115 [2024-10-15 02:02:18.565567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.115 [2024-10-15 02:02:18.873505] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.052 02:02:19 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.052 02:02:19 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:33:11.052 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:33:11.052 02:02:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.052 02:02:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:11.052 02:02:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.052 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:33:11.052 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:11.052 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:33:11.053 02:02:19 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:33:11.053 02:02:19 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:33:11.053 02:02:19 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:33:11.053 02:02:19 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:33:11.053 02:02:19 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:33:11.053 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:11.053 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:11.053 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:33:11.053 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:11.053 02:02:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:33:11.053 [2024-10-15 02:02:19.951625] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001bc2ada0 was disconnected and freed. delete nvme_qpair. 00:33:11.053 [2024-10-15 02:02:19.953225] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001c436660 was disconnected and freed. delete nvme_qpair. 
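The timing_cmd trace above (local time=0 TIMEFORMAT=%2R, then echo 42.99 / helper_time=42.99 earlier in the run) measures how long remove_attach_helper takes using bash's time keyword with TIMEFORMAT=%2R so only the elapsed seconds are emitted. A simplified sketch of that wrapper, reconstructed from the xtrace; it assumes the timed command is quiet on stderr, whereas the real timing_cmd in autotest_common.sh also juggles stdin redirection:

# Sketch: time a command, capture the %2R elapsed-seconds line that
# `time` writes to stderr, report it, and preserve the exit status.
timing_cmd_sketch() {
    local cmd_es=0 time=0 TIMEFORMAT=%2R
    time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
    printf '%s took %ss to complete\n' "$1" "$time"
    return "$cmd_es"
}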
00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:17.615 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:17.615 02:02:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.615 02:02:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:17.615 02:02:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.615 [2024-10-15 02:02:25.884789] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:33:17.615 [2024-10-15 02:02:25.887925] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.615 [2024-10-15 02:02:25.888024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.615 [2024-10-15 02:02:25.888061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 [2024-10-15 02:02:25.888122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:25.888137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:25.888154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 [2024-10-15 02:02:25.888169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:25.888187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:25.888201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 [2024-10-15 02:02:25.888219] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:25.888233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:25.888248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:17.616 02:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:17.616 [2024-10-15 02:02:26.284762] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
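The lines traced at sw_hotplug.sh@12-13 above are the helper that lists the PCI addresses currently backing NVMe bdevs: it dumps all bdevs over JSON-RPC and extracts the unique pci_address fields. A sketch in pipe form (the trace shows jq reading /dev/fd/63, i.e. a process substitution, which is equivalent); rpc.py stands in here for the suite's rpc_cmd wrapper:

# Sketch of bdev_bdfs as traced at sw_hotplug.sh@12-13.
bdev_bdfs() {
    rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))   # e.g. (0000:00:10.0 0000:00:11.0)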
00:33:17.616 [2024-10-15 02:02:26.288045] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:26.288110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:26.288135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 [2024-10-15 02:02:26.288160] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:26.288177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:26.288191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 [2024-10-15 02:02:26.288208] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:26.288221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:26.288236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 [2024-10-15 02:02:26.288250] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:17.616 [2024-10-15 02:02:26.288283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.616 [2024-10-15 02:02:26.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:17.616 02:02:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.616 02:02:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:17.616 02:02:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:17.616 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:17.874 02:02:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:20.406 [2024-10-15 02:02:28.849009] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035024da0 was disconnected and freed. delete nvme_qpair. 00:33:20.406 [2024-10-15 02:02:28.850554] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001c4355a0 was disconnected and freed. delete nvme_qpair. 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:30.423 02:02:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.423 02:02:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:30.423 02:02:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:30.423 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:30.423 02:02:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.423 02:02:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:30.423 02:02:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.423 [2024-10-15 02:02:38.884987] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
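The \0\0\0\0\:\0\0... noise on the right of the == at sw_hotplug.sh@71 above is only bash xtrace backslash-escaping every character of the expected string so it is not read as a glob; the check itself is a plain string comparison that the rescanned BDF list matches the original device set. A sketch of that verification, with the expected list shown as an assumption taken from this run:

# Sketch of the check traced at sw_hotplug.sh@70-71: after rescan,
# the sorted BDF list must equal the devices the test started with.
expected="0000:00:10.0 0000:00:11.0"   # assumed initial device set
bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "$expected" ]] || {
    echo "hotplug verification failed: got ${bdfs[*]}" >&2
    exit 1
}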
00:33:30.423 [2024-10-15 02:02:38.888157] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.423 [2024-10-15 02:02:38.888391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.423 [2024-10-15 02:02:38.888569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.423 [2024-10-15 02:02:38.888828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.423 [2024-10-15 02:02:38.888889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:38.889058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 [2024-10-15 02:02:38.889216] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.424 [2024-10-15 02:02:38.889280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:38.889468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 [2024-10-15 02:02:38.889717] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.424 [2024-10-15 02:02:38.889777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:38.890011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:30.424 02:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:30.424 [2024-10-15 02:02:39.285002] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
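The (( 2 > 0 )) / sleep 0.5 / printf 'Still waiting for %s to be gone' sequence traced at sw_hotplug.sh@50-51 above is the post-removal poll: after the surprise removal, the test re-queries bdev_bdfs until no bdev still reports one of the removed PCI addresses. A sketch of that loop, reusing the bdev_bdfs sketch from earlier:

# Sketch of the wait loop traced at sw_hotplug.sh@50-51.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done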
00:33:30.424 [2024-10-15 02:02:39.288341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.424 [2024-10-15 02:02:39.288524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:39.288724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 [2024-10-15 02:02:39.288952] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.424 [2024-10-15 02:02:39.289095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:39.289260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 [2024-10-15 02:02:39.289463] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.424 [2024-10-15 02:02:39.289521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:39.289763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 [2024-10-15 02:02:39.289914] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:30.424 [2024-10-15 02:02:39.289975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.424 [2024-10-15 02:02:39.290037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.424 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:30.424 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:30.424 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:30.424 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:30.424 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:30.424 02:02:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.424 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:30.424 02:02:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:30.424 02:02:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:30.682 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:33:30.940 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:30.940 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:30.940 02:02:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:32.844 [2024-10-15 02:02:41.847859] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x2000350245a0 was disconnected and freed. delete nvme_qpair. 00:33:32.844 [2024-10-15 02:02:41.849188] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001c40d5a0 was disconnected and freed. delete nvme_qpair. 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:42.817 02:02:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.817 02:02:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:42.817 02:02:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:42.817 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:43.087 02:02:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.087 02:02:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:43.087 02:02:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:43.087 02:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:43.087 [2024-10-15 02:02:51.885318] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
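The per-device echo 1 traced at sw_hotplug.sh@39-40 above triggers the surprise removal itself. bash xtrace does not print redirections, so the sysfs targets below are assumptions: writing 1 to a device's remove node detaches it from the PCI bus, which is what produces the "failed state" and aborted-AER dumps in this log, and the later lone echo 1 at @56 is presumably the matching bus rescan. The @58-62 echoes then appear to restore the uio_pci_generic binding per device, though their exact sysfs nodes are not visible in the trace:

# Sketch only; redirection targets are assumed, not traced.
nvmes=(0000:00:10.0 0000:00:11.0)      # assumed device list
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove
done
echo 1 > /sys/bus/pci/rescan                      # bring them back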
00:33:43.087 [2024-10-15 02:02:51.888278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.087 [2024-10-15 02:02:51.888348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.087 [2024-10-15 02:02:51.888370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.087 [2024-10-15 02:02:51.888400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.087 [2024-10-15 02:02:51.888430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.087 [2024-10-15 02:02:51.888487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.087 [2024-10-15 02:02:51.888505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.087 [2024-10-15 02:02:51.888522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.088 [2024-10-15 02:02:51.888536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.088 [2024-10-15 02:02:51.888553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.088 [2024-10-15 02:02:51.888567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.088 [2024-10-15 02:02:51.888582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.355 [2024-10-15 02:02:52.285339] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
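Each aborted completion in the dumps above carries the status pair "(00/07)": status code type 0x0 (generic command status) and status code 0x07 (Command Abort Requested), which SPDK prints as "ABORTED - BY REQUEST". A tiny decoder covering only the pairs that appear in this log; the mapping is from the NVMe spec, not SPDK-specific:

# Sketch: decode the "(SCT/SC)" pairs printed in the completions.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/00) echo "SUCCESSFUL COMPLETION" ;;
        00/07) echo "ABORTED - BY REQUEST" ;;
        *)     echo "unmapped status ($sct/$sc)" ;;
    esac
}
decode_nvme_status 00 07   # -> ABORTED - BY REQUEST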
00:33:43.355 [2024-10-15 02:02:52.288733] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.355 [2024-10-15 02:02:52.288800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.355 [2024-10-15 02:02:52.288830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.355 [2024-10-15 02:02:52.288857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.355 [2024-10-15 02:02:52.288878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.355 [2024-10-15 02:02:52.288893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.355 [2024-10-15 02:02:52.288926] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.355 [2024-10-15 02:02:52.288939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.355 [2024-10-15 02:02:52.288971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.355 [2024-10-15 02:02:52.288985] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.355 [2024-10-15 02:02:52.289001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.355 [2024-10-15 02:02:52.289015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:43.613 02:02:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.613 02:02:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:43.613 02:02:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:43.613 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:33:43.872 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:43.872 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:43.872 02:02:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:45.775 [2024-10-15 02:02:54.744285] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035036da0 was disconnected and freed. delete nvme_qpair. 00:33:46.034 [2024-10-15 02:02:54.844970] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20003501f720 was disconnected and freed. delete nvme_qpair. 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.95 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.95 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.95 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.95 2 00:33:56.100 remove_attach_helper took 44.95s to complete (handling 2 nvme drive(s)) 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:33:56.100 02:03:04 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 
true 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:56.100 02:03:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:02.694 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:02.694 02:03:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.694 02:03:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:02.694 02:03:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.694 [2024-10-15 02:03:10.872794] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:34:02.694 [2024-10-15 02:03:10.875747] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.694 [2024-10-15 02:03:10.875996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.694 [2024-10-15 02:03:10.876177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.694 [2024-10-15 02:03:10.876343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.694 [2024-10-15 02:03:10.876372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.694 [2024-10-15 02:03:10.876395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 [2024-10-15 02:03:10.876435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.695 [2024-10-15 02:03:10.876459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.695 [2024-10-15 02:03:10.876476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 [2024-10-15 02:03:10.876497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.695 [2024-10-15 02:03:10.876514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.695 [2024-10-15 02:03:10.876533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:02.695 02:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:02.695 [2024-10-15 02:03:11.272796] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:34:02.695 [2024-10-15 02:03:11.276193] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.695 [2024-10-15 02:03:11.276243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.695 [2024-10-15 02:03:11.276272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 [2024-10-15 02:03:11.276307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.695 [2024-10-15 02:03:11.276328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.695 [2024-10-15 02:03:11.276345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 [2024-10-15 02:03:11.276370] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.695 [2024-10-15 02:03:11.276385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.695 [2024-10-15 02:03:11.276419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 [2024-10-15 02:03:11.276442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:02.695 [2024-10-15 02:03:11.276462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.695 [2024-10-15 02:03:11.276477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:02.695 02:03:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.695 02:03:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:02.695 02:03:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 
0000:00:10.0 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:02.695 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:02.953 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:02.953 02:03:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:04.857 [2024-10-15 02:03:13.731891] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035036da0 was disconnected and freed. delete nvme_qpair. 00:34:04.857 [2024-10-15 02:03:13.829966] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20003501f720 was disconnected and freed. delete nvme_qpair. 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:14.840 02:03:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.840 02:03:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:14.840 02:03:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:14.840 [2024-10-15 02:03:23.772976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:34:14.840 [2024-10-15 02:03:23.775688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:14.840 [2024-10-15 02:03:23.775746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.840 [2024-10-15 02:03:23.775769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.840 [2024-10-15 02:03:23.775836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:14.840 [2024-10-15 02:03:23.775858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.840 [2024-10-15 02:03:23.775875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.840 [2024-10-15 02:03:23.775891] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:14.840 [2024-10-15 02:03:23.775909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.840 [2024-10-15 02:03:23.775938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.840 [2024-10-15 02:03:23.775970] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:14.840 [2024-10-15 02:03:23.775983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.840 [2024-10-15 02:03:23.775999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:14.840 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:14.840 02:03:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.840 02:03:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:14.840 02:03:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.099 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:15.099 02:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:15.357 [2024-10-15 02:03:24.272971] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:34:15.357 [2024-10-15 02:03:24.278284] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.357 [2024-10-15 02:03:24.278329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.357 [2024-10-15 02:03:24.278352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.357 [2024-10-15 02:03:24.278377] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.357 [2024-10-15 02:03:24.278393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.358 [2024-10-15 02:03:24.278420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.358 [2024-10-15 02:03:24.278455] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.358 [2024-10-15 02:03:24.278468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.358 [2024-10-15 02:03:24.278507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.358 [2024-10-15 02:03:24.278522] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:15.358 [2024-10-15 02:03:24.278541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:15.358 [2024-10-15 02:03:24.278553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:15.358 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:15.358 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:15.358 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:15.358 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:15.358 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:15.358 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:15.358 02:03:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.358 02:03:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:15.617 02:03:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:15.617 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:34:15.876 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:15.876 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:15.876 02:03:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:17.779 [2024-10-15 02:03:26.730811] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035037da0 was disconnected and freed. delete nvme_qpair. 00:34:18.038 [2024-10-15 02:03:26.830310] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20003501d720 was disconnected and freed. delete nvme_qpair. 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:28.077 02:03:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.077 02:03:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:28.077 02:03:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:28.077 [2024-10-15 02:03:36.773196] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:34:28.077 [2024-10-15 02:03:36.776435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.077 [2024-10-15 02:03:36.776526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.077 [2024-10-15 02:03:36.776549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.077 [2024-10-15 02:03:36.776580] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.077 [2024-10-15 02:03:36.776595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.077 [2024-10-15 02:03:36.776611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.077 [2024-10-15 02:03:36.776627] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.077 [2024-10-15 02:03:36.776643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.077 [2024-10-15 02:03:36.776658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.077 [2024-10-15 02:03:36.776676] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.077 [2024-10-15 02:03:36.776689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.077 [2024-10-15 02:03:36.776710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:28.077 02:03:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.077 02:03:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:28.077 02:03:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:28.077 02:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:28.336 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:28.336 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:28.336 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:28.336 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:28.336 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:28.336 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:28.336 02:03:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.336 02:03:37 sw_hotplug -- common/autotest_common.sh@10 -- # set 
+x 00:34:28.594 02:03:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.594 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:28.594 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:28.594 [2024-10-15 02:03:37.473209] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:34:28.594 [2024-10-15 02:03:37.475136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.594 [2024-10-15 02:03:37.475193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.594 [2024-10-15 02:03:37.475215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.594 [2024-10-15 02:03:37.475241] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.594 [2024-10-15 02:03:37.475257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.594 [2024-10-15 02:03:37.475270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.594 [2024-10-15 02:03:37.475286] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.594 [2024-10-15 02:03:37.475298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.594 [2024-10-15 02:03:37.475311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.594 [2024-10-15 02:03:37.475324] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.594 [2024-10-15 02:03:37.475338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.594 [2024-10-15 02:03:37.475349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:29.160 02:03:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:29.160 02:03:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:29.160 02:03:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:29.160 02:03:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@61 
-- # echo 0000:00:10.0 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:29.160 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:29.419 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:29.419 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:29.419 02:03:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:31.320 [2024-10-15 02:03:40.231140] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035039da0 was disconnected and freed. delete nvme_qpair. 00:34:31.578 [2024-10-15 02:03:40.332715] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035028da0 was disconnected and freed. delete nvme_qpair. 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.53 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.53 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.53 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.53 2 00:34:41.549 remove_attach_helper took 45.53s to complete (handling 2 nvme drive(s)) 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:34:41.549 02:03:50 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69305 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 69305 ']' 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 69305 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69305 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69305' 00:34:41.549 killing process with pid 69305 00:34:41.549 02:03:50 sw_hotplug -- common/autotest_common.sh@969 -- # kill 69305 00:34:41.549 02:03:50 sw_hotplug -- 
common/autotest_common.sh@974 -- # wait 69305 00:34:44.115 02:03:52 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:44.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:44.683 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:44.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:44.683 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:34:44.683 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:34:44.683 00:34:44.683 real 2m32.426s 00:34:44.683 user 1m52.442s 00:34:44.683 sys 0m19.636s 00:34:44.683 02:03:53 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:44.683 02:03:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:44.683 ************************************ 00:34:44.683 END TEST sw_hotplug 00:34:44.683 ************************************ 00:34:44.942 02:03:53 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:34:44.942 02:03:53 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:34:44.942 02:03:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:44.942 02:03:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:44.942 02:03:53 -- common/autotest_common.sh@10 -- # set +x 00:34:44.942 ************************************ 00:34:44.942 START TEST nvme_xnvme 00:34:44.942 ************************************ 00:34:44.942 02:03:53 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:34:44.942 * Looking for test storage... 00:34:44.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:34:44.942 02:03:53 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:44.942 02:03:53 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:34:44.942 02:03:53 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:44.942 02:03:53 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:44.942 02:03:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.943 --rc genhtml_branch_coverage=1 00:34:44.943 --rc genhtml_function_coverage=1 00:34:44.943 --rc genhtml_legend=1 00:34:44.943 --rc geninfo_all_blocks=1 00:34:44.943 --rc geninfo_unexecuted_blocks=1 00:34:44.943 00:34:44.943 ' 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.943 --rc genhtml_branch_coverage=1 00:34:44.943 --rc genhtml_function_coverage=1 00:34:44.943 --rc genhtml_legend=1 00:34:44.943 --rc geninfo_all_blocks=1 00:34:44.943 --rc geninfo_unexecuted_blocks=1 00:34:44.943 00:34:44.943 ' 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.943 --rc genhtml_branch_coverage=1 00:34:44.943 --rc genhtml_function_coverage=1 00:34:44.943 --rc genhtml_legend=1 00:34:44.943 --rc geninfo_all_blocks=1 00:34:44.943 --rc geninfo_unexecuted_blocks=1 00:34:44.943 00:34:44.943 ' 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:44.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.943 --rc genhtml_branch_coverage=1 00:34:44.943 --rc genhtml_function_coverage=1 00:34:44.943 --rc genhtml_legend=1 00:34:44.943 --rc geninfo_all_blocks=1 00:34:44.943 --rc geninfo_unexecuted_blocks=1 00:34:44.943 00:34:44.943 ' 00:34:44.943 02:03:53 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.943 02:03:53 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.943 02:03:53 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.943 02:03:53 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.943 02:03:53 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.943 02:03:53 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:34:44.943 02:03:53 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.943 02:03:53 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:44.943 02:03:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:44.943 ************************************ 00:34:44.943 START TEST xnvme_to_malloc_dd_copy 00:34:44.943 ************************************ 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:34:44.943 02:03:53 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:34:44.943 02:03:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:34:45.202 { 00:34:45.202 "subsystems": [ 00:34:45.202 { 00:34:45.202 "subsystem": "bdev", 00:34:45.202 "config": [ 00:34:45.202 { 00:34:45.202 "params": { 00:34:45.202 "block_size": 512, 00:34:45.202 "num_blocks": 2097152, 00:34:45.202 "name": "malloc0" 00:34:45.202 }, 00:34:45.202 "method": "bdev_malloc_create" 00:34:45.202 }, 00:34:45.202 { 00:34:45.202 "params": { 00:34:45.202 "io_mechanism": "libaio", 00:34:45.202 "filename": "/dev/nullb0", 00:34:45.202 "name": "null0" 00:34:45.202 }, 00:34:45.202 "method": "bdev_xnvme_create" 00:34:45.202 }, 00:34:45.202 { 00:34:45.202 "method": "bdev_wait_for_examine" 00:34:45.202 } 00:34:45.202 ] 00:34:45.202 } 00:34:45.202 ] 00:34:45.202 } 00:34:45.202 [2024-10-15 02:03:54.062499] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
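Note: the xtrace above shows how malloc_to_xnvme_copy wires the copy up: null_blk backs /dev/nullb0, gen_conf emits the JSON printed above, and spdk_dd receives it on /dev/fd/62. A standalone sketch of the same libaio passes, using only commands visible in the trace (the config file path is illustrative, not from the log):

    SPDK=/home/vagrant/spdk_repo/spdk
    modprobe null_blk gb=1            # init_null_blk: 1 GiB /dev/nullb0
    # /tmp/xnvme_copy.json holds the exact JSON printed above (malloc0 + null0, libaio)
    "$SPDK/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json
    "$SPDK/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json   # reverse pass, as in the second run
    modprobe -r null_blk              # remove_null_blk at test teardown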
00:34:45.202 [2024-10-15 02:03:54.062726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70679 ] 00:34:45.461 [2024-10-15 02:03:54.239692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.720 [2024-10-15 02:03:54.490136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.252  [2024-10-15T02:03:57.831Z] Copying: 219/1024 [MB] (219 MBps) [2024-10-15T02:03:58.771Z] Copying: 441/1024 [MB] (222 MBps) [2024-10-15T02:03:59.707Z] Copying: 649/1024 [MB] (207 MBps) [2024-10-15T02:04:00.644Z] Copying: 866/1024 [MB] (216 MBps) [2024-10-15T02:04:03.930Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:34:54.918 00:34:54.918 02:04:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:34:54.918 02:04:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:34:54.918 02:04:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:34:54.918 02:04:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:34:54.918 { 00:34:54.918 "subsystems": [ 00:34:54.918 { 00:34:54.918 "subsystem": "bdev", 00:34:54.918 "config": [ 00:34:54.918 { 00:34:54.918 "params": { 00:34:54.918 "block_size": 512, 00:34:54.918 "num_blocks": 2097152, 00:34:54.918 "name": "malloc0" 00:34:54.918 }, 00:34:54.918 "method": "bdev_malloc_create" 00:34:54.918 }, 00:34:54.918 { 00:34:54.918 "params": { 00:34:54.918 "io_mechanism": "libaio", 00:34:54.918 "filename": "/dev/nullb0", 00:34:54.918 "name": "null0" 00:34:54.918 }, 00:34:54.918 "method": "bdev_xnvme_create" 00:34:54.918 }, 00:34:54.918 { 00:34:54.918 "method": "bdev_wait_for_examine" 00:34:54.918 } 00:34:54.918 ] 00:34:54.918 } 00:34:54.918 ] 00:34:54.918 } 00:34:54.918 [2024-10-15 02:04:03.701869] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:34:54.918 [2024-10-15 02:04:03.702061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:34:54.918 [2024-10-15 02:04:03.875968] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.177 [2024-10-15 02:04:04.087428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.711  [2024-10-15T02:04:07.706Z] Copying: 227/1024 [MB] (227 MBps) [2024-10-15T02:04:08.648Z] Copying: 451/1024 [MB] (224 MBps) [2024-10-15T02:04:09.584Z] Copying: 669/1024 [MB] (217 MBps) [2024-10-15T02:04:10.152Z] Copying: 898/1024 [MB] (229 MBps) [2024-10-15T02:04:13.438Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:35:04.426 00:35:04.426 02:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:35:04.426 02:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:35:04.426 02:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:35:04.426 02:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:35:04.426 02:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:04.426 02:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:04.426 { 00:35:04.426 "subsystems": [ 00:35:04.426 { 00:35:04.426 "subsystem": "bdev", 00:35:04.426 "config": [ 00:35:04.426 { 00:35:04.426 "params": { 00:35:04.426 "block_size": 512, 00:35:04.426 "num_blocks": 2097152, 00:35:04.426 "name": "malloc0" 00:35:04.426 }, 00:35:04.426 "method": "bdev_malloc_create" 00:35:04.426 }, 00:35:04.426 { 00:35:04.426 "params": { 00:35:04.426 "io_mechanism": "io_uring", 00:35:04.426 "filename": "/dev/nullb0", 00:35:04.426 "name": "null0" 00:35:04.426 }, 00:35:04.426 "method": "bdev_xnvme_create" 00:35:04.426 }, 00:35:04.426 { 00:35:04.426 "method": "bdev_wait_for_examine" 00:35:04.426 } 00:35:04.426 ] 00:35:04.426 } 00:35:04.426 ] 00:35:04.426 } 00:35:04.426 [2024-10-15 02:04:13.014485] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
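Note: the third and fourth copy passes repeat the same two transfers with "io_mechanism" switched to "io_uring"; as the xnvme.sh@38-39 trace above shows, only the bdev_xnvme_create parameters change between passes. Condensed into a sketch (gen_conf stands for the traced helper that prints the JSON; feeding it via process substitution rather than the script's /dev/fd/62 plumbing is a simplification):

    xnvme_io=(libaio io_uring)                        # xnvme.sh@20-21
    declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
    for io in "${xnvme_io[@]}"; do                    # xnvme.sh@38
        method_bdev_xnvme_create_0[io_mechanism]=$io  # xnvme.sh@39
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # malloc -> null_blk
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # null_blk -> malloc
    done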
00:35:04.426 [2024-10-15 02:04:13.014663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70888 ] 00:35:04.426 [2024-10-15 02:04:13.188654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.426 [2024-10-15 02:04:13.379413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.962  [2024-10-15T02:04:16.911Z] Copying: 235/1024 [MB] (235 MBps) [2024-10-15T02:04:17.846Z] Copying: 472/1024 [MB] (236 MBps) [2024-10-15T02:04:18.781Z] Copying: 708/1024 [MB] (236 MBps) [2024-10-15T02:04:19.039Z] Copying: 943/1024 [MB] (235 MBps) [2024-10-15T02:04:22.350Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:35:13.338 00:35:13.338 02:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:35:13.338 02:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:35:13.338 02:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:13.338 02:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:13.338 { 00:35:13.338 "subsystems": [ 00:35:13.338 { 00:35:13.338 "subsystem": "bdev", 00:35:13.338 "config": [ 00:35:13.338 { 00:35:13.338 "params": { 00:35:13.338 "block_size": 512, 00:35:13.339 "num_blocks": 2097152, 00:35:13.339 "name": "malloc0" 00:35:13.339 }, 00:35:13.339 "method": "bdev_malloc_create" 00:35:13.339 }, 00:35:13.339 { 00:35:13.339 "params": { 00:35:13.339 "io_mechanism": "io_uring", 00:35:13.339 "filename": "/dev/nullb0", 00:35:13.339 "name": "null0" 00:35:13.339 }, 00:35:13.339 "method": "bdev_xnvme_create" 00:35:13.339 }, 00:35:13.339 { 00:35:13.339 "method": "bdev_wait_for_examine" 00:35:13.339 } 00:35:13.339 ] 00:35:13.339 } 00:35:13.339 ] 00:35:13.339 } 00:35:13.339 [2024-10-15 02:04:22.109120] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:35:13.339 [2024-10-15 02:04:22.109336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70992 ] 00:35:13.339 [2024-10-15 02:04:22.282107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.598 [2024-10-15 02:04:22.467753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.129  [2024-10-15T02:04:25.708Z] Copying: 243/1024 [MB] (243 MBps) [2024-10-15T02:04:27.085Z] Copying: 488/1024 [MB] (244 MBps) [2024-10-15T02:04:27.652Z] Copying: 733/1024 [MB] (244 MBps) [2024-10-15T02:04:27.911Z] Copying: 976/1024 [MB] (243 MBps) [2024-10-15T02:04:31.208Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:35:22.196 00:35:22.196 02:04:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:35:22.196 02:04:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:35:22.196 00:35:22.196 real 0m37.040s 00:35:22.196 user 0m31.783s 00:35:22.196 sys 0m4.736s 00:35:22.196 02:04:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:22.196 02:04:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:22.196 ************************************ 00:35:22.196 END TEST xnvme_to_malloc_dd_copy 00:35:22.196 ************************************ 00:35:22.196 02:04:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:35:22.196 02:04:31 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:22.196 02:04:31 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:22.196 02:04:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:22.196 ************************************ 00:35:22.196 START TEST xnvme_bdevperf 00:35:22.196 ************************************ 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:22.196 02:04:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:22.196 { 00:35:22.196 "subsystems": [ 00:35:22.196 { 00:35:22.196 "subsystem": "bdev", 00:35:22.196 "config": [ 00:35:22.196 { 00:35:22.196 "params": { 00:35:22.196 "io_mechanism": "libaio", 00:35:22.196 "filename": "/dev/nullb0", 00:35:22.196 "name": "null0" 00:35:22.196 }, 00:35:22.196 "method": "bdev_xnvme_create" 00:35:22.196 }, 00:35:22.196 { 00:35:22.196 "method": "bdev_wait_for_examine" 00:35:22.196 } 00:35:22.196 ] 00:35:22.196 } 00:35:22.196 ] 00:35:22.196 } 00:35:22.196 [2024-10-15 02:04:31.152051] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:35:22.196 [2024-10-15 02:04:31.152231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71119 ] 00:35:22.455 [2024-10-15 02:04:31.321871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.714 [2024-10-15 02:04:31.515528] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.972 Running I/O for 5 seconds... 
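Note: the run just launched above measures the libaio path with bdevperf against the null0 xnvme bdev alone; every parameter comes straight from the trace. Outside the harness, with the JSON shown above saved to a file (filename illustrative):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json bdevperf_null0.json -q 64 -w randread -t 5 -T null0 -o 4096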
00:35:24.844 154880.00 IOPS, 605.00 MiB/s [2024-10-15T02:04:35.233Z] 155072.00 IOPS, 605.75 MiB/s [2024-10-15T02:04:36.169Z] 154709.33 IOPS, 604.33 MiB/s [2024-10-15T02:04:37.106Z] 155104.00 IOPS, 605.88 MiB/s [2024-10-15T02:04:37.106Z] 155225.60 IOPS, 606.35 MiB/s 00:35:28.094 Latency(us) 00:35:28.094 [2024-10-15T02:04:37.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.094 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:35:28.094 null0 : 5.00 155155.83 606.08 0.00 0.00 409.94 351.88 2174.60 00:35:28.094 [2024-10-15T02:04:37.106Z] =================================================================================================================== 00:35:28.094 [2024-10-15T02:04:37.106Z] Total : 155155.83 606.08 0.00 0.00 409.94 351.88 2174.60 00:35:29.031 02:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:35:29.031 02:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:35:29.031 02:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:35:29.031 02:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:35:29.031 02:04:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:29.031 02:04:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.031 { 00:35:29.031 "subsystems": [ 00:35:29.031 { 00:35:29.031 "subsystem": "bdev", 00:35:29.031 "config": [ 00:35:29.031 { 00:35:29.031 "params": { 00:35:29.031 "io_mechanism": "io_uring", 00:35:29.031 "filename": "/dev/nullb0", 00:35:29.031 "name": "null0" 00:35:29.031 }, 00:35:29.031 "method": "bdev_xnvme_create" 00:35:29.031 }, 00:35:29.031 { 00:35:29.031 "method": "bdev_wait_for_examine" 00:35:29.031 } 00:35:29.031 ] 00:35:29.031 } 00:35:29.031 ] 00:35:29.031 } 00:35:29.031 [2024-10-15 02:04:37.960248] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:35:29.031 [2024-10-15 02:04:37.960465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71195 ] 00:35:29.289 [2024-10-15 02:04:38.135839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.548 [2024-10-15 02:04:38.323954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.807 Running I/O for 5 seconds... 
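Note: the MiB/s column in the libaio latency table above follows directly from IOPS times the 4096-byte IO size; a quick check of the total:

    # 155155.83 IOPS * 4096 B per IO / 2^20 B per MiB = 606.08 MiB/s, matching the table
    awk 'BEGIN { printf "%.2f\n", 155155.83 * 4096 / (1024 * 1024) }'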
00:35:31.772 195712.00 IOPS, 764.50 MiB/s
[2024-10-15T02:04:41.720Z] 195584.00 IOPS, 764.00 MiB/s
[2024-10-15T02:04:42.655Z] 196544.00 IOPS, 767.75 MiB/s
[2024-10-15T02:04:44.032Z] 197168.00 IOPS, 770.19 MiB/s
00:35:35.020 Latency(us)
00:35:35.020 [2024-10-15T02:04:44.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:35.020 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:35:35.020 null0 : 5.00 197094.42 769.90 0.00 0.00 322.27 192.70 2472.49
00:35:35.020 [2024-10-15T02:04:44.032Z] ===================================================================================================================
00:35:35.020 [2024-10-15T02:04:44.032Z] Total : 197094.42 769.90 0.00 0.00 322.27 192.70 2472.49
00:35:35.588 02:04:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:35:35.847 02:04:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk
00:35:35.847
00:35:35.847 real 0m13.604s
00:35:35.847 user 0m10.563s
00:35:35.847 sys 0m2.811s
00:35:35.847 02:04:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:35.847 02:04:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.847 ************************************
00:35:35.847 END TEST xnvme_bdevperf
00:35:35.847 ************************************
00:35:35.847
00:35:35.847 real 0m50.951s
00:35:35.847 user 0m42.498s
00:35:35.847 sys 0m7.689s
00:35:35.847 02:04:44 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:35.847 ************************************
00:35:35.847 02:04:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:35:35.847 END TEST nvme_xnvme
00:35:35.847 ************************************
00:35:35.847 02:04:44 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:35:35.847 02:04:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:35:35.847 02:04:44 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:35:35.847 02:04:44 -- common/autotest_common.sh@10 -- # set +x
00:35:35.847 ************************************
00:35:35.847 START TEST blockdev_xnvme
00:35:35.847 ************************************
00:35:35.847 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:35:35.847 * Looking for test storage...
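Note: the scripts/common.sh trace that follows (identical to the one earlier in nvme_xnvme) decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing component-wise. The traced logic reduces to roughly this; a paraphrase covering the strict '<' and '>' cases only, not the script verbatim:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2; local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
        done
        return 1   # equal: neither strictly '<' nor '>'
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # succeeds, as in the trace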
00:35:35.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:35.847 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:35.847 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:35:35.847 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:36.105 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.105 02:04:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.106 02:04:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.106 --rc genhtml_branch_coverage=1 00:35:36.106 --rc genhtml_function_coverage=1 00:35:36.106 --rc genhtml_legend=1 00:35:36.106 --rc geninfo_all_blocks=1 00:35:36.106 --rc geninfo_unexecuted_blocks=1 00:35:36.106 00:35:36.106 ' 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.106 --rc genhtml_branch_coverage=1 00:35:36.106 --rc genhtml_function_coverage=1 00:35:36.106 --rc genhtml_legend=1 
00:35:36.106 --rc geninfo_all_blocks=1 00:35:36.106 --rc geninfo_unexecuted_blocks=1 00:35:36.106 00:35:36.106 ' 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.106 --rc genhtml_branch_coverage=1 00:35:36.106 --rc genhtml_function_coverage=1 00:35:36.106 --rc genhtml_legend=1 00:35:36.106 --rc geninfo_all_blocks=1 00:35:36.106 --rc geninfo_unexecuted_blocks=1 00:35:36.106 00:35:36.106 ' 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.106 --rc genhtml_branch_coverage=1 00:35:36.106 --rc genhtml_function_coverage=1 00:35:36.106 --rc genhtml_legend=1 00:35:36.106 --rc geninfo_all_blocks=1 00:35:36.106 --rc geninfo_unexecuted_blocks=1 00:35:36.106 00:35:36.106 ' 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71341 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:36.106 02:04:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71341 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 71341 ']' 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:36.106 02:04:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:36.106 [2024-10-15 02:04:45.058298] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:35:36.106 [2024-10-15 02:04:45.058509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71341 ] 00:35:36.363 [2024-10-15 02:04:45.232356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.621 [2024-10-15 02:04:45.415452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.207 02:04:46 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.207 02:04:46 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:35:37.207 02:04:46 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:35:37.207 02:04:46 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:35:37.207 02:04:46 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:35:37.207 02:04:46 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:35:37.207 02:04:46 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:37.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:37.776 Waiting for block devices as requested 00:35:37.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:38.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:38.035 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:38.035 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:43.305 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:35:43.305 
02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:35:43.305 02:04:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:43.305 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:35:43.306 nvme0n1 00:35:43.306 nvme1n1 00:35:43.306 nvme2n1 00:35:43.306 nvme2n2 00:35:43.306 nvme2n3 00:35:43.306 nvme3n1 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:43.306 02:04:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e5cedbd8-539c-44a1-bec1-e3116d48c9ce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e5cedbd8-539c-44a1-bec1-e3116d48c9ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "737ca16a-ce9e-42dc-9f98-0a133713593d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "737ca16a-ce9e-42dc-9f98-0a133713593d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "90344a29-d16e-4023-ac1d-5771ea4bebc9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "90344a29-d16e-4023-ac1d-5771ea4bebc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "349feba4-b17d-4c70-9c07-64100b0a55e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "349feba4-b17d-4c70-9c07-64100b0a55e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "23853429-c2a3-4c32-8c4b-1d97f1588276"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23853429-c2a3-4c32-8c4b-1d97f1588276",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "98382da9-bd9e-422f-bee7-1ed335c4195b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "98382da9-bd9e-422f-bee7-1ed335c4195b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:35:43.306 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:35:43.597 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:35:43.597 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:35:43.597 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:35:43.597 02:04:52 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71341 00:35:43.597 02:04:52 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 71341 ']' 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71341 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71341 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:43.597 killing process with pid 71341 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71341' 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71341 00:35:43.597 02:04:52 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71341 00:35:45.500 02:04:54 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:45.500 02:04:54 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:35:45.500 02:04:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:35:45.500 02:04:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:45.500 02:04:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:45.500 ************************************ 00:35:45.500 START TEST bdev_hello_world 00:35:45.500 ************************************ 00:35:45.500 02:04:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:35:45.500 [2024-10-15 02:04:54.444846] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:35:45.500 [2024-10-15 02:04:54.445029] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71718 ] 00:35:45.759 [2024-10-15 02:04:54.616718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.018 [2024-10-15 02:04:54.803005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.276 [2024-10-15 02:04:55.183036] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:46.276 [2024-10-15 02:04:55.183090] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:35:46.276 [2024-10-15 02:04:55.183126] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:46.276 [2024-10-15 02:04:55.185337] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:46.276 [2024-10-15 02:04:55.185762] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:46.276 [2024-10-15 02:04:55.185803] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:46.276 [2024-10-15 02:04:55.186049] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
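Note: bdev_hello_world above drives the stock hello_bdev example against the first xnvme bdev, using the bdev.json that blockdev.sh generated from the six /dev/nvme*n* nodes. The invocation as traced (minus run_test's empty trailing argument):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1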
00:35:46.276 00:35:46.276 [2024-10-15 02:04:55.186093] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:47.212 00:35:47.212 real 0m1.820s 00:35:47.212 user 0m1.450s 00:35:47.212 sys 0m0.257s 00:35:47.212 02:04:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:47.212 02:04:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:47.212 ************************************ 00:35:47.212 END TEST bdev_hello_world 00:35:47.212 ************************************ 00:35:47.212 02:04:56 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:35:47.212 02:04:56 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:47.212 02:04:56 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:47.212 02:04:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:47.212 ************************************ 00:35:47.212 START TEST bdev_bounds 00:35:47.212 ************************************ 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71749 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71749' 00:35:47.212 Process bdevio pid: 71749 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71749 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71749 ']' 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:47.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:47.212 02:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:47.471 [2024-10-15 02:04:56.344274] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:35:47.471 [2024-10-15 02:04:56.344525] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71749 ] 00:35:47.729 [2024-10-15 02:04:56.527170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:47.729 [2024-10-15 02:04:56.715213] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.729 [2024-10-15 02:04:56.715316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.729 [2024-10-15 02:04:56.715320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:48.296 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:48.296 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:35:48.296 02:04:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:48.554 I/O targets: 00:35:48.554 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:35:48.554 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:35:48.554 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:48.554 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:48.554 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:48.554 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:35:48.554 00:35:48.554 00:35:48.554 CUnit - A unit testing framework for C - Version 2.1-3 00:35:48.554 http://cunit.sourceforge.net/ 00:35:48.554 00:35:48.554 00:35:48.554 Suite: bdevio tests on: nvme3n1 00:35:48.554 Test: blockdev write read block ...passed 00:35:48.554 Test: blockdev write zeroes read block ...passed 00:35:48.554 Test: blockdev write zeroes read no split ...passed 00:35:48.554 Test: blockdev write zeroes read split ...passed 00:35:48.554 Test: blockdev write zeroes read split partial ...passed 00:35:48.554 Test: blockdev reset ...passed 00:35:48.554 Test: blockdev write read 8 blocks ...passed 00:35:48.554 Test: blockdev write read size > 128k ...passed 00:35:48.554 Test: blockdev write read invalid size ...passed 00:35:48.554 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.554 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.554 Test: blockdev write read max offset ...passed 00:35:48.554 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.554 Test: blockdev writev readv 8 blocks ...passed 00:35:48.554 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.554 Test: blockdev writev readv block ...passed 00:35:48.554 Test: blockdev writev readv size > 128k ...passed 00:35:48.554 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.554 Test: blockdev comparev and writev ...passed 00:35:48.554 Test: blockdev nvme passthru rw ...passed 00:35:48.554 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.554 Test: blockdev nvme admin passthru ...passed 00:35:48.554 Test: blockdev copy ...passed 00:35:48.554 Suite: bdevio tests on: nvme2n3 00:35:48.554 Test: blockdev write read block ...passed 00:35:48.554 Test: blockdev write zeroes read block ...passed 00:35:48.554 Test: blockdev write zeroes read no split ...passed 00:35:48.554 Test: blockdev write zeroes read split ...passed 00:35:48.554 Test: blockdev write zeroes read split partial ...passed 00:35:48.554 Test: blockdev reset ...passed 
00:35:48.555 Test: blockdev write read 8 blocks ...passed 00:35:48.555 Test: blockdev write read size > 128k ...passed 00:35:48.555 Test: blockdev write read invalid size ...passed 00:35:48.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.555 Test: blockdev write read max offset ...passed 00:35:48.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.555 Test: blockdev writev readv 8 blocks ...passed 00:35:48.555 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.555 Test: blockdev writev readv block ...passed 00:35:48.555 Test: blockdev writev readv size > 128k ...passed 00:35:48.555 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.555 Test: blockdev comparev and writev ...passed 00:35:48.555 Test: blockdev nvme passthru rw ...passed 00:35:48.555 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.555 Test: blockdev nvme admin passthru ...passed 00:35:48.555 Test: blockdev copy ...passed 00:35:48.555 Suite: bdevio tests on: nvme2n2 00:35:48.555 Test: blockdev write read block ...passed 00:35:48.555 Test: blockdev write zeroes read block ...passed 00:35:48.555 Test: blockdev write zeroes read no split ...passed 00:35:48.555 Test: blockdev write zeroes read split ...passed 00:35:48.555 Test: blockdev write zeroes read split partial ...passed 00:35:48.555 Test: blockdev reset ...passed 00:35:48.555 Test: blockdev write read 8 blocks ...passed 00:35:48.555 Test: blockdev write read size > 128k ...passed 00:35:48.555 Test: blockdev write read invalid size ...passed 00:35:48.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.555 Test: blockdev write read max offset ...passed 00:35:48.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.555 Test: blockdev writev readv 8 blocks ...passed 00:35:48.555 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.555 Test: blockdev writev readv block ...passed 00:35:48.555 Test: blockdev writev readv size > 128k ...passed 00:35:48.555 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.555 Test: blockdev comparev and writev ...passed 00:35:48.555 Test: blockdev nvme passthru rw ...passed 00:35:48.555 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.555 Test: blockdev nvme admin passthru ...passed 00:35:48.555 Test: blockdev copy ...passed 00:35:48.555 Suite: bdevio tests on: nvme2n1 00:35:48.555 Test: blockdev write read block ...passed 00:35:48.555 Test: blockdev write zeroes read block ...passed 00:35:48.814 Test: blockdev write zeroes read no split ...passed 00:35:48.814 Test: blockdev write zeroes read split ...passed 00:35:48.814 Test: blockdev write zeroes read split partial ...passed 00:35:48.814 Test: blockdev reset ...passed 00:35:48.814 Test: blockdev write read 8 blocks ...passed 00:35:48.814 Test: blockdev write read size > 128k ...passed 00:35:48.814 Test: blockdev write read invalid size ...passed 00:35:48.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.814 Test: blockdev write read max offset ...passed 00:35:48.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.814 Test: blockdev writev readv 8 blocks 
...passed 00:35:48.814 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.814 Test: blockdev writev readv block ...passed 00:35:48.814 Test: blockdev writev readv size > 128k ...passed 00:35:48.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.814 Test: blockdev comparev and writev ...passed 00:35:48.814 Test: blockdev nvme passthru rw ...passed 00:35:48.814 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.814 Test: blockdev nvme admin passthru ...passed 00:35:48.814 Test: blockdev copy ...passed 00:35:48.814 Suite: bdevio tests on: nvme1n1 00:35:48.814 Test: blockdev write read block ...passed 00:35:48.814 Test: blockdev write zeroes read block ...passed 00:35:48.814 Test: blockdev write zeroes read no split ...passed 00:35:48.814 Test: blockdev write zeroes read split ...passed 00:35:48.814 Test: blockdev write zeroes read split partial ...passed 00:35:48.814 Test: blockdev reset ...passed 00:35:48.814 Test: blockdev write read 8 blocks ...passed 00:35:48.814 Test: blockdev write read size > 128k ...passed 00:35:48.814 Test: blockdev write read invalid size ...passed 00:35:48.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.814 Test: blockdev write read max offset ...passed 00:35:48.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.814 Test: blockdev writev readv 8 blocks ...passed 00:35:48.814 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.814 Test: blockdev writev readv block ...passed 00:35:48.814 Test: blockdev writev readv size > 128k ...passed 00:35:48.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.814 Test: blockdev comparev and writev ...passed 00:35:48.814 Test: blockdev nvme passthru rw ...passed 00:35:48.814 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.814 Test: blockdev nvme admin passthru ...passed 00:35:48.814 Test: blockdev copy ...passed 00:35:48.814 Suite: bdevio tests on: nvme0n1 00:35:48.814 Test: blockdev write read block ...passed 00:35:48.814 Test: blockdev write zeroes read block ...passed 00:35:48.814 Test: blockdev write zeroes read no split ...passed 00:35:48.814 Test: blockdev write zeroes read split ...passed 00:35:48.814 Test: blockdev write zeroes read split partial ...passed 00:35:48.814 Test: blockdev reset ...passed 00:35:48.814 Test: blockdev write read 8 blocks ...passed 00:35:48.814 Test: blockdev write read size > 128k ...passed 00:35:48.814 Test: blockdev write read invalid size ...passed 00:35:48.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.814 Test: blockdev write read max offset ...passed 00:35:48.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.814 Test: blockdev writev readv 8 blocks ...passed 00:35:48.814 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.814 Test: blockdev writev readv block ...passed 00:35:48.814 Test: blockdev writev readv size > 128k ...passed 00:35:48.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.814 Test: blockdev comparev and writev ...passed 00:35:48.814 Test: blockdev nvme passthru rw ...passed 00:35:48.814 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.814 Test: blockdev nvme admin passthru ...passed 00:35:48.814 Test: blockdev copy ...passed 
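All six suites above exercise the same bdevio checklist per bdev, and CUnit rolls the results into the Run Summary that follows (138 checks across the 6 suites). A minimal sketch of driving bdevio manually, assuming the same JSON config and the default /var/tmp/spdk.sock RPC socket shown earlier; -w appears to hold the app until the tests are triggered over RPC:

    # Start bdevio in wait mode, then trigger the suites once it is listening (commands from this run)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # once the app is listening on /var/tmp/spdk.sock:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests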
00:35:48.814 00:35:48.814 Run Summary: Type Total Ran Passed Failed Inactive 00:35:48.814 suites 6 6 n/a 0 0 00:35:48.814 tests 138 138 138 0 0 00:35:48.814 asserts 780 780 780 0 n/a 00:35:48.814 00:35:48.814 Elapsed time = 1.078 seconds 00:35:48.814 0 00:35:48.814 02:04:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71749 00:35:48.814 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71749 ']' 00:35:48.814 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71749 00:35:48.814 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:35:48.814 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:48.814 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71749 00:35:49.073 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:49.073 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:49.073 killing process with pid 71749 00:35:49.073 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71749' 00:35:49.073 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71749 00:35:49.073 02:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71749 00:35:50.009 02:04:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:35:50.009 00:35:50.009 real 0m2.640s 00:35:50.009 user 0m6.198s 00:35:50.009 sys 0m0.425s 00:35:50.009 02:04:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.009 02:04:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:50.009 ************************************ 00:35:50.009 END TEST bdev_bounds 00:35:50.009 ************************************ 00:35:50.009 02:04:58 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:35:50.009 02:04:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:35:50.009 02:04:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.009 02:04:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:50.009 ************************************ 00:35:50.009 START TEST bdev_nbd 00:35:50.009 ************************************ 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
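bdev_nbd runs in two phases: nbd_rpc_start_stop_verify first attaches each bdev to a kernel NBD node and detaches it again, then nbd_rpc_data_verify re-attaches all six and pushes real data through /dev/nbd*. A minimal sketch of the attach/verify/detach cycle the trace below repeats per bdev, assuming a bdev_svc app already listening on /var/tmp/spdk-nbd.sock:

    # Export one bdev over NBD, confirm the kernel sees it, then tear it down (RPCs from this run)
    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC nbd_start_disk nvme0n1          # prints the allocated node, e.g. /dev/nbd0
    grep -q -w nbd0 /proc/partitions     # device is registered with the kernel
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct     # one direct read must succeed
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_get_disks                   # '[]' once every export is stopped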
00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71814 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71814 /var/tmp/spdk-nbd.sock 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71814 ']' 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:50.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:50.009 02:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:50.009 [2024-10-15 02:04:59.001660] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:35:50.009 [2024-10-15 02:04:59.001803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.267 [2024-10-15 02:04:59.162478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.525 [2024-10-15 02:04:59.358201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:51.091 02:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:51.350 
1+0 records in 00:35:51.350 1+0 records out 00:35:51.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076424 s, 5.4 MB/s 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:51.350 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:51.351 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:51.918 1+0 records in 00:35:51.918 1+0 records out 00:35:51.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765714 s, 5.3 MB/s 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:51.918 02:05:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:51.918 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:51.918 1+0 records in 00:35:51.918 1+0 records out 00:35:51.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729837 s, 5.6 MB/s 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:52.177 02:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.436 1+0 records in 00:35:52.436 1+0 records out 00:35:52.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809396 s, 5.1 MB/s 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:52.436 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.732 1+0 records in 00:35:52.732 1+0 records out 00:35:52.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000854166 s, 4.8 MB/s 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:52.732 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:35:52.991 02:05:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:52.991 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.992 1+0 records in 00:35:52.992 1+0 records out 00:35:52.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702292 s, 5.8 MB/s 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:52.992 02:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:53.250 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd0", 00:35:53.250 "bdev_name": "nvme0n1" 00:35:53.250 }, 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd1", 00:35:53.250 "bdev_name": "nvme1n1" 00:35:53.250 }, 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd2", 00:35:53.250 "bdev_name": "nvme2n1" 00:35:53.250 }, 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd3", 00:35:53.250 "bdev_name": "nvme2n2" 00:35:53.250 }, 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd4", 00:35:53.250 "bdev_name": "nvme2n3" 00:35:53.250 }, 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd5", 00:35:53.250 "bdev_name": "nvme3n1" 00:35:53.250 } 00:35:53.250 ]' 00:35:53.250 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:53.250 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:53.250 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:53.250 { 00:35:53.250 "nbd_device": "/dev/nbd0", 00:35:53.250 "bdev_name": "nvme0n1" 00:35:53.251 }, 00:35:53.251 { 00:35:53.251 "nbd_device": "/dev/nbd1", 00:35:53.251 "bdev_name": "nvme1n1" 00:35:53.251 }, 00:35:53.251 { 00:35:53.251 "nbd_device": "/dev/nbd2", 00:35:53.251 "bdev_name": "nvme2n1" 00:35:53.251 }, 00:35:53.251 { 00:35:53.251 "nbd_device": "/dev/nbd3", 00:35:53.251 "bdev_name": "nvme2n2" 00:35:53.251 }, 00:35:53.251 { 00:35:53.251 "nbd_device": "/dev/nbd4", 00:35:53.251 "bdev_name": "nvme2n3" 00:35:53.251 }, 00:35:53.251 { 00:35:53.251 "nbd_device": 
"/dev/nbd5", 00:35:53.251 "bdev_name": "nvme3n1" 00:35:53.251 } 00:35:53.251 ]' 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:53.251 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:53.509 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:53.767 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:53.768 02:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:54.335 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:54.594 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:54.853 02:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:55.420 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:35:55.679 /dev/nbd0 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:55.679 1+0 records in 00:35:55.679 1+0 records out 00:35:55.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438421 s, 9.3 MB/s 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:55.679 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:55.680 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:55.680 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:55.680 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:35:55.938 /dev/nbd1 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:55.938 1+0 records in 00:35:55.938 1+0 records out 00:35:55.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828569 s, 4.9 MB/s 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:55.938 02:05:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:55.938 02:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:35:56.196 /dev/nbd10 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:56.196 1+0 records in 00:35:56.196 1+0 records out 00:35:56.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661384 s, 6.2 MB/s 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:56.196 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:56.197 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:56.197 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:56.197 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:56.197 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:56.197 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:56.197 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:35:56.455 /dev/nbd11 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:56.455 02:05:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:56.455 1+0 records in 00:35:56.455 1+0 records out 00:35:56.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778341 s, 5.3 MB/s 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:56.455 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:35:57.022 /dev/nbd12 00:35:57.022 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:35:57.022 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:35:57.022 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:57.023 1+0 records in 00:35:57.023 1+0 records out 00:35:57.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771819 s, 5.3 MB/s 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:57.023 02:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:35:57.281 /dev/nbd13 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:57.281 1+0 records in 00:35:57.281 1+0 records out 00:35:57.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658767 s, 6.2 MB/s 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:57.281 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:57.540 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd0", 00:35:57.540 "bdev_name": "nvme0n1" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd1", 00:35:57.540 "bdev_name": "nvme1n1" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd10", 00:35:57.540 "bdev_name": "nvme2n1" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd11", 00:35:57.540 "bdev_name": "nvme2n2" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd12", 00:35:57.540 "bdev_name": "nvme2n3" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd13", 00:35:57.540 "bdev_name": "nvme3n1" 00:35:57.540 } 00:35:57.540 ]' 00:35:57.540 02:05:06 
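Every attach in this sequence follows the same waitfornbd pattern: ask the SPDK target to bind a bdev to an NBD node over the RPC socket, poll /proc/partitions until the kernel registers it, then prove the node serves reads with a single direct 4 KiB read. A minimal standalone sketch in bash, using only the commands visible in the trace (the 0.1 s poll delay is illustrative and not shown in this excerpt; the retry bound of 20 and everything else comes from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev=nvme2n2        # bdev name as reported by the target
nbd=/dev/nbd11      # free NBD node to bind it to

# Expose the bdev through the kernel NBD driver.
"$rpc" -s "$sock" nbd_start_disk "$bdev" "$nbd"

# Wait until the kernel lists the device.
for ((i = 1; i <= 20; i++)); do
    grep -q -w "$(basename "$nbd")" /proc/partitions && break
    sleep 0.1    # illustrative back-off; the real delay is not in this excerpt
done

# One 4 KiB direct read confirms the device is actually serving I/O.
dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
stat -c %s /tmp/nbdtest    # expect 4096, matching the '[' 4096 '!=' 0 ']' check above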
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:57.540 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd0", 00:35:57.540 "bdev_name": "nvme0n1" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd1", 00:35:57.540 "bdev_name": "nvme1n1" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd10", 00:35:57.540 "bdev_name": "nvme2n1" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd11", 00:35:57.540 "bdev_name": "nvme2n2" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd12", 00:35:57.540 "bdev_name": "nvme2n3" 00:35:57.540 }, 00:35:57.540 { 00:35:57.540 "nbd_device": "/dev/nbd13", 00:35:57.540 "bdev_name": "nvme3n1" 00:35:57.540 } 00:35:57.540 ]' 00:35:57.540 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:57.540 /dev/nbd1 00:35:57.540 /dev/nbd10 00:35:57.540 /dev/nbd11 00:35:57.540 /dev/nbd12 00:35:57.540 /dev/nbd13' 00:35:57.540 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:57.540 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:57.540 /dev/nbd1 00:35:57.540 /dev/nbd10 00:35:57.540 /dev/nbd11 00:35:57.540 /dev/nbd12 00:35:57.541 /dev/nbd13' 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:57.541 256+0 records in 00:35:57.541 256+0 records out 00:35:57.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107281 s, 97.7 MB/s 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:57.541 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:57.799 256+0 records in 00:35:57.799 256+0 records out 00:35:57.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165433 s, 6.3 MB/s 00:35:57.799 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:57.799 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:35:57.799 256+0 records in 00:35:57.799 256+0 records out 00:35:57.799 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.192413 s, 5.4 MB/s 00:35:57.799 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:57.799 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:35:58.058 256+0 records in 00:35:58.058 256+0 records out 00:35:58.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168632 s, 6.2 MB/s 00:35:58.058 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:58.058 02:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:35:58.317 256+0 records in 00:35:58.317 256+0 records out 00:35:58.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180166 s, 5.8 MB/s 00:35:58.317 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:58.317 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:35:58.317 256+0 records in 00:35:58.317 256+0 records out 00:35:58.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171128 s, 6.1 MB/s 00:35:58.317 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:58.317 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:35:58.576 256+0 records in 00:35:58.576 256+0 records out 00:35:58.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17203 s, 6.1 MB/s 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:58.576 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:58.835 02:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:59.403 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:59.662 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:59.921 02:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:00.180 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:36:00.440 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:36:00.440 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:36:00.440 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:36:00.440 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:00.441 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:36:00.699 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:36:00.957 malloc_lvol_verify 00:36:00.957 02:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:36:01.216 d734f965-1e2a-4810-a6c8-e27efc8b19ae 00:36:01.216 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:36:01.474 692b7dd6-9d82-478b-b7cd-1552ee301818 00:36:01.475 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:36:01.734 /dev/nbd0 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:36:01.734 mke2fs 1.47.0 (5-Feb-2023) 00:36:01.734 
Discarding device blocks: 0/4096 done 00:36:01.734 Creating filesystem with 4096 1k blocks and 1024 inodes 00:36:01.734 00:36:01.734 Allocating group tables: 0/1 done 00:36:01.734 Writing inode tables: 0/1 done 00:36:01.734 Creating journal (1024 blocks): done 00:36:01.734 Writing superblocks and filesystem accounting information: 0/1 done 00:36:01.734 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.734 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71814 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71814 ']' 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71814 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71814 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:01.993 killing process with pid 71814 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71814' 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71814 00:36:01.993 02:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71814 00:36:02.957 02:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:36:02.957 00:36:02.957 real 0m13.038s 00:36:02.957 user 0m18.284s 00:36:02.957 sys 0m4.415s 00:36:02.957 02:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.957 02:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:36:02.957 ************************************ 00:36:02.957 END TEST bdev_nbd 00:36:02.957 
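The nbd_with_lvol_verify step that closes this test reduces to a short RPC sequence: build a malloc bdev, layer a logical-volume store and a volume on it, expose the volume over NBD, and prove it end to end with mkfs.ext4. A sketch using exactly the calls and sizes from the trace (16 MB malloc bdev with 512-byte blocks, 4 MB lvol):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# 16 MB backing bdev with 512-byte blocks.
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
# Logical volume store on top of it, then a 4 MB volume inside.
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
# Expose the volume as /dev/nbd0 and format it to prove writability.
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0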
************************************ 00:36:03.215 02:05:11 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:36:03.215 02:05:11 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:36:03.215 02:05:11 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:36:03.215 02:05:11 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:36:03.215 02:05:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:03.216 02:05:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.216 02:05:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:03.216 ************************************ 00:36:03.216 START TEST bdev_fio 00:36:03.216 ************************************ 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:36:03.216 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:36:03.216 02:05:12 
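The bdev_fio stage that starts here generates a fio job file for the SPDK bdev ioengine. A hand-written approximation as a bash heredoc; the [global] knobs mirror what is visible in the trace (serialize_overlap, and the --iodepth=8 --bs=4k --runtime=10 fio_params), while anything the generator's template adds beyond that is not shown in this excerpt:

cat > /tmp/bdev.fio <<'EOF'
[global]
; the SPDK fio plugin (spdk/build/fio/spdk_bdev) requires thread=1
ioengine=spdk_bdev
thread=1
; echoed by the generator above after its fio-3.x version check
serialize_overlap=1
; mirrors the fio_params in the trace
iodepth=8
bs=4k
runtime=10
EOF

# One [job_*] section per bdev, mirroring the echo lines that follow.
for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> /tmp/bdev.fio
done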
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:03.216 ************************************ 00:36:03.216 START TEST bdev_fio_rw_verify 00:36:03.216 ************************************ 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:03.216 02:05:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:03.475 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:03.475 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:03.475 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:03.475 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:03.475 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:03.475 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:03.475 fio-3.35 00:36:03.475 Starting 6 threads 00:36:15.678 00:36:15.678 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72244: Tue Oct 15 02:05:23 2024 00:36:15.678 read: IOPS=29.4k, BW=115MiB/s (120MB/s)(1148MiB/10001msec) 00:36:15.678 slat (usec): min=2, max=4030, avg= 6.93, stdev=11.25 00:36:15.678 clat (usec): min=77, max=1439.7k, avg=619.11, 
stdev=7512.39 00:36:15.678 lat (usec): min=86, max=1439.7k, avg=626.04, stdev=7512.45 00:36:15.678 clat percentiles (usec): 00:36:15.678 | 50.000th=[ 578], 99.000th=[ 1123], 99.900th=[ 1729], 00:36:15.678 | 99.990th=[ 5342], 99.999th=[1434452] 00:36:15.678 write: IOPS=29.7k, BW=116MiB/s (121MB/s)(1159MiB/10001msec); 0 zone resets 00:36:15.678 slat (usec): min=7, max=5301, avg=28.55, stdev=41.57 00:36:15.678 clat (usec): min=90, max=14147, avg=732.58, stdev=348.59 00:36:15.678 lat (usec): min=108, max=14170, avg=761.13, stdev=351.43 00:36:15.678 clat percentiles (usec): 00:36:15.678 | 50.000th=[ 717], 99.000th=[ 1647], 99.900th=[ 4555], 99.990th=[ 9503], 00:36:15.678 | 99.999th=[13829] 00:36:15.678 bw ( KiB/s): min=90583, max=153969, per=100.00%, avg=120624.46, stdev=3135.31, samples=112 00:36:15.678 iops : min=22645, max=38492, avg=30155.93, stdev=783.84, samples=112 00:36:15.678 lat (usec) : 100=0.01%, 250=4.19%, 500=25.35%, 750=36.37%, 1000=26.68% 00:36:15.678 lat (msec) : 2=7.09%, 4=0.21%, 10=0.10%, 20=0.01%, 2000=0.01% 00:36:15.678 cpu : usr=56.78%, sys=27.50%, ctx=6396, majf=0, minf=24920 00:36:15.678 IO depths : 1=11.1%, 2=23.2%, 4=51.5%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.678 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.678 issued rwts: total=293867,296591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.678 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:15.678 00:36:15.678 Run status group 0 (all jobs): 00:36:15.678 READ: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1148MiB (1204MB), run=10001-10001msec 00:36:15.678 WRITE: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=1159MiB (1215MB), run=10001-10001msec 00:36:15.678 ----------------------------------------------------- 00:36:15.678 Suppressions used: 00:36:15.678 count bytes template 00:36:15.678 6 48 /usr/src/fio/parse.c 00:36:15.678 2503 240288 /usr/src/fio/iolog.c 00:36:15.678 1 8 libtcmalloc_minimal.so 00:36:15.678 1 904 libcrypto.so 00:36:15.678 ----------------------------------------------------- 00:36:15.678 00:36:15.678 00:36:15.678 real 0m12.246s 00:36:15.678 user 0m35.810s 00:36:15.678 sys 0m16.893s 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:15.678 ************************************ 00:36:15.678 END TEST bdev_fio_rw_verify 00:36:15.678 ************************************ 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:36:15.678 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e5cedbd8-539c-44a1-bec1-e3116d48c9ce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e5cedbd8-539c-44a1-bec1-e3116d48c9ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "737ca16a-ce9e-42dc-9f98-0a133713593d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "737ca16a-ce9e-42dc-9f98-0a133713593d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "90344a29-d16e-4023-ac1d-5771ea4bebc9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "90344a29-d16e-4023-ac1d-5771ea4bebc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "349feba4-b17d-4c70-9c07-64100b0a55e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "349feba4-b17d-4c70-9c07-64100b0a55e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "23853429-c2a3-4c32-8c4b-1d97f1588276"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23853429-c2a3-4c32-8c4b-1d97f1588276",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "98382da9-bd9e-422f-bee7-1ed335c4195b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "98382da9-bd9e-422f-bee7-1ed335c4195b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:15.679 /home/vagrant/spdk_repo/spdk 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:36:15.679 00:36:15.679 real 0m12.430s 00:36:15.679 user 0m35.917s 00:36:15.679 sys 0m16.968s 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:15.679 ************************************ 00:36:15.679 END TEST bdev_fio 00:36:15.679 02:05:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:15.679 ************************************ 00:36:15.679 02:05:24 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:15.679 02:05:24 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:15.679 02:05:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:36:15.679 02:05:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:15.679 02:05:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:15.679 ************************************ 00:36:15.679 START TEST bdev_verify 00:36:15.679 ************************************ 00:36:15.679 02:05:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:15.679 [2024-10-15 02:05:24.591571] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:36:15.679 [2024-10-15 02:05:24.591739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72413 ] 00:36:15.937 [2024-10-15 02:05:24.765778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:16.196 [2024-10-15 02:05:24.961797] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.196 [2024-10-15 02:05:24.961829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.455 Running I/O for 5 seconds... 
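The bdev_verify run now in flight drives bdevperf straight against the bdev layer, with no NBD or fio involved. Its invocation, reduced from the run_test line above with flag meanings annotated (all arguments are verbatim from the trace):

spdk=/home/vagrant/spdk_repo/spdk
# -q 128: 128 outstanding I/Os per job   -o 4096: 4 KiB transfers
# -w verify: write, read back, compare   -t 5: run for 5 seconds
# -m 0x3: reactors on cores 0 and 1 (matching the two 'Reactor started' lines)
"$spdk"/build/examples/bdevperf --json "$spdk"/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3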
00:36:18.767 21646.00 IOPS, 84.55 MiB/s [2024-10-15T02:05:28.712Z] 22610.50 IOPS, 88.32 MiB/s [2024-10-15T02:05:29.647Z] 22635.00 IOPS, 88.42 MiB/s [2024-10-15T02:05:30.582Z] 22304.25 IOPS, 87.13 MiB/s [2024-10-15T02:05:30.582Z] 22720.20 IOPS, 88.75 MiB/s 00:36:21.570 Latency(us) 00:36:21.570 [2024-10-15T02:05:30.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.570 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x0 length 0xa0000 00:36:21.570 nvme0n1 : 5.06 1746.49 6.82 0.00 0.00 73147.52 14656.23 67204.19 00:36:21.570 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0xa0000 length 0xa0000 00:36:21.570 nvme0n1 : 5.04 1524.64 5.96 0.00 0.00 83787.71 14358.34 73400.32 00:36:21.570 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x0 length 0xbd0bd 00:36:21.570 nvme1n1 : 5.04 3149.04 12.30 0.00 0.00 40407.54 5719.51 61484.68 00:36:21.570 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:36:21.570 nvme1n1 : 5.05 2824.08 11.03 0.00 0.00 45024.86 4944.99 64821.06 00:36:21.570 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x0 length 0x80000 00:36:21.570 nvme2n1 : 5.07 1768.76 6.91 0.00 0.00 71839.11 7000.44 75783.45 00:36:21.570 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x80000 length 0x80000 00:36:21.570 nvme2n1 : 5.07 1540.09 6.02 0.00 0.00 82400.19 9532.51 67204.19 00:36:21.570 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x0 length 0x80000 00:36:21.570 nvme2n2 : 5.06 1745.72 6.82 0.00 0.00 72673.70 16801.05 65297.69 00:36:21.570 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x80000 length 0x80000 00:36:21.570 nvme2n2 : 5.06 1542.08 6.02 0.00 0.00 82107.67 7357.91 82456.20 00:36:21.570 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x0 length 0x80000 00:36:21.570 nvme2n3 : 5.07 1767.87 6.91 0.00 0.00 71623.87 4438.57 74830.20 00:36:21.570 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x80000 length 0x80000 00:36:21.570 nvme2n3 : 5.07 1539.56 6.01 0.00 0.00 82060.24 9830.40 78166.57 00:36:21.570 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x0 length 0x20000 00:36:21.570 nvme3n1 : 5.07 1767.49 6.90 0.00 0.00 71485.26 4885.41 69587.32 00:36:21.570 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:21.570 Verification LBA range: start 0x20000 length 0x20000 00:36:21.570 nvme3n1 : 5.08 1562.39 6.10 0.00 0.00 80741.18 3872.58 87222.46 00:36:21.570 [2024-10-15T02:05:30.582Z] =================================================================================================================== 00:36:21.570 [2024-10-15T02:05:30.582Z] Total : 22478.20 87.81 0.00 0.00 67768.76 3872.58 87222.46 00:36:22.946 00:36:22.946 real 0m7.109s 00:36:22.946 user 0m11.012s 00:36:22.946 sys 0m1.796s 00:36:22.946 02:05:31 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:36:22.946 02:05:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:36:22.946 ************************************ 00:36:22.946 END TEST bdev_verify 00:36:22.947 ************************************ 00:36:22.947 02:05:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:22.947 02:05:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:36:22.947 02:05:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:22.947 02:05:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:22.947 ************************************ 00:36:22.947 START TEST bdev_verify_big_io 00:36:22.947 ************************************ 00:36:22.947 02:05:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:22.947 [2024-10-15 02:05:31.728459] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:36:22.947 [2024-10-15 02:05:31.728641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72515 ] 00:36:22.947 [2024-10-15 02:05:31.887341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:23.205 [2024-10-15 02:05:32.083211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.205 [2024-10-15 02:05:32.083225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.773 Running I/O for 5 seconds... 
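bdev_verify_big_io is the same bdevperf harness with one knob changed: the transfer size rises from 4 KiB to 64 KiB, which is why the table below shows far fewer IOPS but higher per-device MiB/s than the 4 KiB run. The only delta against the previous invocation:

spdk=/home/vagrant/spdk_repo/spdk
# Identical flags except -o 65536 (64 KiB I/Os instead of 4 KiB).
"$spdk"/build/examples/bdevperf --json "$spdk"/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3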
00:36:28.435 1952.00 IOPS, 122.00 MiB/s [2024-10-15T02:05:38.867Z] 3116.00 IOPS, 194.75 MiB/s [2024-10-15T02:05:38.867Z] 3295.00 IOPS, 205.94 MiB/s 00:36:29.855 Latency(us) 00:36:29.855 [2024-10-15T02:05:38.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.855 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x0 length 0xa000 00:36:29.855 nvme0n1 : 5.80 140.79 8.80 0.00 0.00 895451.60 29193.31 1845493.76 00:36:29.855 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0xa000 length 0xa000 00:36:29.855 nvme0n1 : 5.85 152.68 9.54 0.00 0.00 823818.23 6404.65 831234.79 00:36:29.855 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x0 length 0xbd0b 00:36:29.855 nvme1n1 : 5.83 104.30 6.52 0.00 0.00 1180690.08 10545.34 3172419.03 00:36:29.855 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0xbd0b length 0xbd0b 00:36:29.855 nvme1n1 : 5.82 151.30 9.46 0.00 0.00 812309.38 24069.59 1220161.16 00:36:29.855 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x0 length 0x8000 00:36:29.855 nvme2n1 : 5.81 166.75 10.42 0.00 0.00 720933.42 32172.22 636771.61 00:36:29.855 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x8000 length 0x8000 00:36:29.855 nvme2n1 : 5.83 150.93 9.43 0.00 0.00 793570.28 35031.97 1311673.25 00:36:29.855 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x0 length 0x8000 00:36:29.855 nvme2n2 : 5.81 193.99 12.12 0.00 0.00 604367.31 30980.65 743535.71 00:36:29.855 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x8000 length 0x8000 00:36:29.855 nvme2n2 : 5.84 139.78 8.74 0.00 0.00 833024.77 15847.80 1403185.34 00:36:29.855 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x0 length 0x8000 00:36:29.855 nvme2n3 : 5.81 129.37 8.09 0.00 0.00 882824.74 28001.75 2104778.01 00:36:29.855 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x8000 length 0x8000 00:36:29.855 nvme2n3 : 5.83 140.07 8.75 0.00 0.00 810522.91 19779.96 1479445.41 00:36:29.855 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x0 length 0x2000 00:36:29.855 nvme3n1 : 5.82 195.28 12.20 0.00 0.00 569518.54 6315.29 762600.73 00:36:29.855 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:29.855 Verification LBA range: start 0x2000 length 0x2000 00:36:29.855 nvme3n1 : 5.84 167.26 10.45 0.00 0.00 660571.70 7000.44 1006632.96 00:36:29.855 [2024-10-15T02:05:38.867Z] =================================================================================================================== 00:36:29.855 [2024-10-15T02:05:38.867Z] Total : 1832.50 114.53 0.00 0.00 775505.02 6315.29 3172419.03 00:36:31.232 00:36:31.232 real 0m8.168s 00:36:31.232 user 0m14.568s 00:36:31.232 sys 0m0.625s 00:36:31.232 02:05:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:31.232 02:05:39 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:36:31.232 ************************************ 00:36:31.232 END TEST bdev_verify_big_io 00:36:31.232 ************************************ 00:36:31.232 02:05:39 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:31.232 02:05:39 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:36:31.232 02:05:39 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.232 02:05:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:31.232 ************************************ 00:36:31.232 START TEST bdev_write_zeroes 00:36:31.232 ************************************ 00:36:31.232 02:05:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:31.232 [2024-10-15 02:05:39.976468] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:36:31.232 [2024-10-15 02:05:39.976640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72626 ] 00:36:31.232 [2024-10-15 02:05:40.150389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.492 [2024-10-15 02:05:40.360662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.059 Running I/O for 1 seconds... 
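Note: throughput in these result tables follows directly from IOPS times the I/O size (4096 B for this write_zeroes run). A quick sanity check against the first line of the table below, using only standard shell tooling:

    awk 'BEGIN { printf "%.2f MiB/s\n", 82272 * 4096 / 1048576 }'   # -> 321.38 MiB/s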
00:36:32.996 82272.00 IOPS, 321.38 MiB/s 00:36:32.996 Latency(us) 00:36:32.996 [2024-10-15T02:05:42.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.996 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:32.996 nvme0n1 : 1.03 12343.30 48.22 0.00 0.00 10358.05 6404.65 22282.24 00:36:32.996 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:32.996 nvme1n1 : 1.02 19957.39 77.96 0.00 0.00 6399.17 4051.32 14596.65 00:36:32.996 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:32.996 nvme2n1 : 1.03 12330.05 48.16 0.00 0.00 10297.34 6196.13 16801.05 00:36:32.996 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:32.996 nvme2n2 : 1.03 12317.98 48.12 0.00 0.00 10300.57 6285.50 18350.08 00:36:32.996 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:32.996 nvme2n3 : 1.03 12305.06 48.07 0.00 0.00 10301.78 6523.81 19779.96 00:36:32.996 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:32.996 nvme3n1 : 1.03 12293.02 48.02 0.00 0.00 10302.85 6494.02 21090.68 00:36:32.996 [2024-10-15T02:05:42.008Z] =================================================================================================================== 00:36:32.996 [2024-10-15T02:05:42.008Z] Total : 81546.80 318.54 0.00 0.00 9357.41 4051.32 22282.24 00:36:33.932 00:36:33.932 real 0m2.958s 00:36:33.932 user 0m2.104s 00:36:33.932 sys 0m0.671s 00:36:33.932 02:05:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:33.932 02:05:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:36:33.932 ************************************ 00:36:33.932 END TEST bdev_write_zeroes 00:36:33.932 ************************************ 00:36:33.932 02:05:42 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:33.932 02:05:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:36:33.932 02:05:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:33.932 02:05:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:33.932 ************************************ 00:36:33.932 START TEST bdev_json_nonenclosed 00:36:33.932 ************************************ 00:36:33.932 02:05:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:34.192 [2024-10-15 02:05:42.989693] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
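Note: bdev_json_nonenclosed feeds bdevperf a deliberately malformed configuration and expects a clean, non-zero exit rather than a crash (the "not enclosed in {}" error just below). The actual contents of test/bdev/nonenclosed.json are not reproduced in this log; a minimal input that would trip the same json_config check is a bare subsystems section with no surrounding object (hypothetical, for illustration only):

    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF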
00:36:34.192 [2024-10-15 02:05:42.989877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72681 ] 00:36:34.192 [2024-10-15 02:05:43.164670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.451 [2024-10-15 02:05:43.346656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.451 [2024-10-15 02:05:43.346795] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:36:34.451 [2024-10-15 02:05:43.346821] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:34.451 [2024-10-15 02:05:43.346834] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:34.711 00:36:34.711 real 0m0.810s 00:36:34.711 user 0m0.546s 00:36:34.711 sys 0m0.156s 00:36:34.711 02:05:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:34.711 02:05:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:36:34.711 ************************************ 00:36:34.711 END TEST bdev_json_nonenclosed 00:36:34.711 ************************************ 00:36:34.970 02:05:43 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:34.970 02:05:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:36:34.970 02:05:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:34.970 02:05:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:34.970 ************************************ 00:36:34.970 START TEST bdev_json_nonarray 00:36:34.970 ************************************ 00:36:34.970 02:05:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:34.970 [2024-10-15 02:05:43.851274] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:36:34.970 [2024-10-15 02:05:43.851464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72712 ] 00:36:35.229 [2024-10-15 02:05:44.018493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.229 [2024-10-15 02:05:44.213397] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.229 [2024-10-15 02:05:44.213563] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
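Note: bdev_json_nonarray is the companion negative test: the document is enclosed in {} and "subsystems" is present, but its value is not an array, so json_config rejects it with the error above. Again, the real test/bdev/nonarray.json is not shown in this log; a minimal shape that would fail the same way (hypothetical):

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF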
00:36:35.229 [2024-10-15 02:05:44.213592] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:35.229 [2024-10-15 02:05:44.213606] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:35.796 00:36:35.796 real 0m0.821s 00:36:35.796 user 0m0.557s 00:36:35.796 sys 0m0.158s 00:36:35.796 02:05:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:35.796 02:05:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:36:35.796 ************************************ 00:36:35.796 END TEST bdev_json_nonarray 00:36:35.796 ************************************ 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:36:35.796 02:05:44 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:36.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:38.265 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:38.265 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:38.265 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:36:38.265 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:36:38.265 00:36:38.265 real 1m2.394s 00:36:38.265 user 1m41.270s 00:36:38.265 sys 0m29.423s 00:36:38.265 02:05:47 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:38.265 ************************************ 00:36:38.265 END TEST blockdev_xnvme 00:36:38.265 02:05:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:38.265 ************************************ 00:36:38.265 02:05:47 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:36:38.265 02:05:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:38.265 02:05:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:38.265 02:05:47 -- common/autotest_common.sh@10 -- # set +x 00:36:38.265 ************************************ 00:36:38.265 START TEST ublk 00:36:38.265 ************************************ 00:36:38.265 02:05:47 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:36:38.265 * Looking for test storage... 
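Note: between suites, scripts/setup.sh unbinds the emulated NVMe controllers (PCI IDs 1b36:0010) from the kernel nvme driver and hands them to uio_pci_generic so the userspace target can claim them, as logged above. The resulting binding can be inspected through sysfs; a sketch, with device addresses taken from this run:

    for d in /sys/bus/pci/devices/0000:00:1[0-3].0; do
        echo "$d -> $(basename "$(readlink "$d/driver")")"
    done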
00:36:38.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:36:38.265 02:05:47 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:38.265 02:05:47 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:36:38.265 02:05:47 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:38.524 02:05:47 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:38.524 02:05:47 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:36:38.524 02:05:47 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:36:38.524 02:05:47 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:36:38.524 02:05:47 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:38.524 02:05:47 ublk -- scripts/common.sh@344 -- # case "$op" in 00:36:38.524 02:05:47 ublk -- scripts/common.sh@345 -- # : 1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:38.524 02:05:47 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:38.524 02:05:47 ublk -- scripts/common.sh@365 -- # decimal 1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@353 -- # local d=1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:38.524 02:05:47 ublk -- scripts/common.sh@355 -- # echo 1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:36:38.524 02:05:47 ublk -- scripts/common.sh@366 -- # decimal 2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@353 -- # local d=2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:38.524 02:05:47 ublk -- scripts/common.sh@355 -- # echo 2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:36:38.524 02:05:47 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:38.524 02:05:47 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:38.524 02:05:47 ublk -- scripts/common.sh@368 -- # return 0 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.524 --rc genhtml_branch_coverage=1 00:36:38.524 --rc genhtml_function_coverage=1 00:36:38.524 --rc genhtml_legend=1 00:36:38.524 --rc geninfo_all_blocks=1 00:36:38.524 --rc geninfo_unexecuted_blocks=1 00:36:38.524 00:36:38.524 ' 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.524 --rc genhtml_branch_coverage=1 00:36:38.524 --rc genhtml_function_coverage=1 00:36:38.524 --rc genhtml_legend=1 00:36:38.524 --rc geninfo_all_blocks=1 00:36:38.524 --rc geninfo_unexecuted_blocks=1 00:36:38.524 00:36:38.524 ' 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.524 --rc genhtml_branch_coverage=1 00:36:38.524 --rc 
genhtml_function_coverage=1 00:36:38.524 --rc genhtml_legend=1 00:36:38.524 --rc geninfo_all_blocks=1 00:36:38.524 --rc geninfo_unexecuted_blocks=1 00:36:38.524 00:36:38.524 ' 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:38.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.524 --rc genhtml_branch_coverage=1 00:36:38.524 --rc genhtml_function_coverage=1 00:36:38.524 --rc genhtml_legend=1 00:36:38.524 --rc geninfo_all_blocks=1 00:36:38.524 --rc geninfo_unexecuted_blocks=1 00:36:38.524 00:36:38.524 ' 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:36:38.524 02:05:47 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:36:38.524 02:05:47 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:36:38.524 02:05:47 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:36:38.524 02:05:47 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:36:38.524 02:05:47 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:36:38.524 02:05:47 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:36:38.524 02:05:47 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:36:38.524 02:05:47 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:36:38.524 02:05:47 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:38.524 02:05:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:36:38.524 ************************************ 00:36:38.524 START TEST test_save_ublk_config 00:36:38.524 ************************************ 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73007 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73007 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73007 ']' 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:38.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
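Note: test_save_ublk_config exercises the save/restore round trip: start a bare spdk_tgt with ublk tracing, build a ublk disk over a malloc bdev via RPC, snapshot the runtime configuration, then prove a second target can boot from that snapshot alone. Roughly, in rpc.py terms (a sketch; the malloc size arguments are inferred from the 8192 x 4096 B bdev visible in the saved config below):

    ./build/bin/spdk_tgt -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB, 4 KiB blocks (inferred)
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    scripts/rpc.py save_config > ublk_config.json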
00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:38.525 02:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:38.525 [2024-10-15 02:05:47.514133] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:36:38.525 [2024-10-15 02:05:47.514312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73007 ] 00:36:38.783 [2024-10-15 02:05:47.692947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.041 [2024-10-15 02:05:47.979660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.976 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:39.976 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:36:39.976 02:05:48 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:36:39.976 02:05:48 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:36:39.976 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.976 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:39.976 [2024-10-15 02:05:48.732506] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:39.976 [2024-10-15 02:05:48.733639] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:39.976 malloc0 00:36:39.977 [2024-10-15 02:05:48.803655] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:36:39.977 [2024-10-15 02:05:48.803763] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:36:39.977 [2024-10-15 02:05:48.803794] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:36:39.977 [2024-10-15 02:05:48.803803] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:36:39.977 [2024-10-15 02:05:48.811603] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:36:39.977 [2024-10-15 02:05:48.811633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:36:39.977 [2024-10-15 02:05:48.819485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:36:39.977 [2024-10-15 02:05:48.819609] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:36:39.977 [2024-10-15 02:05:48.843559] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:36:39.977 0 00:36:39.977 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.977 02:05:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:36:39.977 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.977 02:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:40.236 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.236 02:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:36:40.236 
"subsystems": [ 00:36:40.236 { 00:36:40.236 "subsystem": "fsdev", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "fsdev_set_opts", 00:36:40.236 "params": { 00:36:40.236 "fsdev_io_pool_size": 65535, 00:36:40.236 "fsdev_io_cache_size": 256 00:36:40.236 } 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "keyring", 00:36:40.236 "config": [] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "iobuf", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "iobuf_set_options", 00:36:40.236 "params": { 00:36:40.236 "small_pool_count": 8192, 00:36:40.236 "large_pool_count": 1024, 00:36:40.236 "small_bufsize": 8192, 00:36:40.236 "large_bufsize": 135168 00:36:40.236 } 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "sock", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "sock_set_default_impl", 00:36:40.236 "params": { 00:36:40.236 "impl_name": "posix" 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "sock_impl_set_options", 00:36:40.236 "params": { 00:36:40.236 "impl_name": "ssl", 00:36:40.236 "recv_buf_size": 4096, 00:36:40.236 "send_buf_size": 4096, 00:36:40.236 "enable_recv_pipe": true, 00:36:40.236 "enable_quickack": false, 00:36:40.236 "enable_placement_id": 0, 00:36:40.236 "enable_zerocopy_send_server": true, 00:36:40.236 "enable_zerocopy_send_client": false, 00:36:40.236 "zerocopy_threshold": 0, 00:36:40.236 "tls_version": 0, 00:36:40.236 "enable_ktls": false 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "sock_impl_set_options", 00:36:40.236 "params": { 00:36:40.236 "impl_name": "posix", 00:36:40.236 "recv_buf_size": 2097152, 00:36:40.236 "send_buf_size": 2097152, 00:36:40.236 "enable_recv_pipe": true, 00:36:40.236 "enable_quickack": false, 00:36:40.236 "enable_placement_id": 0, 00:36:40.236 "enable_zerocopy_send_server": true, 00:36:40.236 "enable_zerocopy_send_client": false, 00:36:40.236 "zerocopy_threshold": 0, 00:36:40.236 "tls_version": 0, 00:36:40.236 "enable_ktls": false 00:36:40.236 } 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "vmd", 00:36:40.236 "config": [] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "accel", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "accel_set_options", 00:36:40.236 "params": { 00:36:40.236 "small_cache_size": 128, 00:36:40.236 "large_cache_size": 16, 00:36:40.236 "task_count": 2048, 00:36:40.236 "sequence_count": 2048, 00:36:40.236 "buf_count": 2048 00:36:40.236 } 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "bdev", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "bdev_set_options", 00:36:40.236 "params": { 00:36:40.236 "bdev_io_pool_size": 65535, 00:36:40.236 "bdev_io_cache_size": 256, 00:36:40.236 "bdev_auto_examine": true, 00:36:40.236 "iobuf_small_cache_size": 128, 00:36:40.236 "iobuf_large_cache_size": 16 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "bdev_raid_set_options", 00:36:40.236 "params": { 00:36:40.236 "process_window_size_kb": 1024, 00:36:40.236 "process_max_bandwidth_mb_sec": 0 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "bdev_iscsi_set_options", 00:36:40.236 "params": { 00:36:40.236 "timeout_sec": 30 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "bdev_nvme_set_options", 00:36:40.236 "params": { 00:36:40.236 "action_on_timeout": "none", 00:36:40.236 "timeout_us": 0, 00:36:40.236 
"timeout_admin_us": 0, 00:36:40.236 "keep_alive_timeout_ms": 10000, 00:36:40.236 "arbitration_burst": 0, 00:36:40.236 "low_priority_weight": 0, 00:36:40.236 "medium_priority_weight": 0, 00:36:40.236 "high_priority_weight": 0, 00:36:40.236 "nvme_adminq_poll_period_us": 10000, 00:36:40.236 "nvme_ioq_poll_period_us": 0, 00:36:40.236 "io_queue_requests": 0, 00:36:40.236 "delay_cmd_submit": true, 00:36:40.236 "transport_retry_count": 4, 00:36:40.236 "bdev_retry_count": 3, 00:36:40.236 "transport_ack_timeout": 0, 00:36:40.236 "ctrlr_loss_timeout_sec": 0, 00:36:40.236 "reconnect_delay_sec": 0, 00:36:40.236 "fast_io_fail_timeout_sec": 0, 00:36:40.236 "disable_auto_failback": false, 00:36:40.236 "generate_uuids": false, 00:36:40.236 "transport_tos": 0, 00:36:40.236 "nvme_error_stat": false, 00:36:40.236 "rdma_srq_size": 0, 00:36:40.236 "io_path_stat": false, 00:36:40.236 "allow_accel_sequence": false, 00:36:40.236 "rdma_max_cq_size": 0, 00:36:40.236 "rdma_cm_event_timeout_ms": 0, 00:36:40.236 "dhchap_digests": [ 00:36:40.236 "sha256", 00:36:40.236 "sha384", 00:36:40.236 "sha512" 00:36:40.236 ], 00:36:40.236 "dhchap_dhgroups": [ 00:36:40.236 "null", 00:36:40.236 "ffdhe2048", 00:36:40.236 "ffdhe3072", 00:36:40.236 "ffdhe4096", 00:36:40.236 "ffdhe6144", 00:36:40.236 "ffdhe8192" 00:36:40.236 ] 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "bdev_nvme_set_hotplug", 00:36:40.236 "params": { 00:36:40.236 "period_us": 100000, 00:36:40.236 "enable": false 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "bdev_malloc_create", 00:36:40.236 "params": { 00:36:40.236 "name": "malloc0", 00:36:40.236 "num_blocks": 8192, 00:36:40.236 "block_size": 4096, 00:36:40.236 "physical_block_size": 4096, 00:36:40.236 "uuid": "4fff8a40-667f-4516-b145-cb7594411378", 00:36:40.236 "optimal_io_boundary": 0, 00:36:40.236 "md_size": 0, 00:36:40.236 "dif_type": 0, 00:36:40.236 "dif_is_head_of_md": false, 00:36:40.236 "dif_pi_format": 0 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "bdev_wait_for_examine" 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "scsi", 00:36:40.236 "config": null 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "scheduler", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "framework_set_scheduler", 00:36:40.236 "params": { 00:36:40.236 "name": "static" 00:36:40.236 } 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "vhost_scsi", 00:36:40.236 "config": [] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "vhost_blk", 00:36:40.236 "config": [] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "ublk", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "ublk_create_target", 00:36:40.236 "params": { 00:36:40.236 "cpumask": "1" 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "ublk_start_disk", 00:36:40.236 "params": { 00:36:40.236 "bdev_name": "malloc0", 00:36:40.236 "ublk_id": 0, 00:36:40.236 "num_queues": 1, 00:36:40.236 "queue_depth": 128 00:36:40.236 } 00:36:40.236 } 00:36:40.236 ] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "nbd", 00:36:40.236 "config": [] 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "subsystem": "nvmf", 00:36:40.236 "config": [ 00:36:40.236 { 00:36:40.236 "method": "nvmf_set_config", 00:36:40.236 "params": { 00:36:40.236 "discovery_filter": "match_any", 00:36:40.236 "admin_cmd_passthru": { 00:36:40.236 "identify_ctrlr": false 00:36:40.236 }, 00:36:40.236 "dhchap_digests": [ 
00:36:40.236 "sha256", 00:36:40.236 "sha384", 00:36:40.236 "sha512" 00:36:40.236 ], 00:36:40.236 "dhchap_dhgroups": [ 00:36:40.236 "null", 00:36:40.236 "ffdhe2048", 00:36:40.236 "ffdhe3072", 00:36:40.236 "ffdhe4096", 00:36:40.236 "ffdhe6144", 00:36:40.236 "ffdhe8192" 00:36:40.236 ] 00:36:40.236 } 00:36:40.236 }, 00:36:40.236 { 00:36:40.236 "method": "nvmf_set_max_subsystems", 00:36:40.236 "params": { 00:36:40.236 "max_subsystems": 1024 00:36:40.236 } 00:36:40.236 }, 00:36:40.237 { 00:36:40.237 "method": "nvmf_set_crdt", 00:36:40.237 "params": { 00:36:40.237 "crdt1": 0, 00:36:40.237 "crdt2": 0, 00:36:40.237 "crdt3": 0 00:36:40.237 } 00:36:40.237 } 00:36:40.237 ] 00:36:40.237 }, 00:36:40.237 { 00:36:40.237 "subsystem": "iscsi", 00:36:40.237 "config": [ 00:36:40.237 { 00:36:40.237 "method": "iscsi_set_options", 00:36:40.237 "params": { 00:36:40.237 "node_base": "iqn.2016-06.io.spdk", 00:36:40.237 "max_sessions": 128, 00:36:40.237 "max_connections_per_session": 2, 00:36:40.237 "max_queue_depth": 64, 00:36:40.237 "default_time2wait": 2, 00:36:40.237 "default_time2retain": 20, 00:36:40.237 "first_burst_length": 8192, 00:36:40.237 "immediate_data": true, 00:36:40.237 "allow_duplicated_isid": false, 00:36:40.237 "error_recovery_level": 0, 00:36:40.237 "nop_timeout": 60, 00:36:40.237 "nop_in_interval": 30, 00:36:40.237 "disable_chap": false, 00:36:40.237 "require_chap": false, 00:36:40.237 "mutual_chap": false, 00:36:40.237 "chap_group": 0, 00:36:40.237 "max_large_datain_per_connection": 64, 00:36:40.237 "max_r2t_per_connection": 4, 00:36:40.237 "pdu_pool_size": 36864, 00:36:40.237 "immediate_data_pool_size": 16384, 00:36:40.237 "data_out_pool_size": 2048 00:36:40.237 } 00:36:40.237 } 00:36:40.237 ] 00:36:40.237 } 00:36:40.237 ] 00:36:40.237 }' 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73007 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73007 ']' 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73007 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73007 00:36:40.237 killing process with pid 73007 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73007' 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73007 00:36:40.237 02:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73007 00:36:41.613 [2024-10-15 02:05:50.325877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:36:41.613 [2024-10-15 02:05:50.359535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:36:41.613 [2024-10-15 02:05:50.359671] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:36:41.613 [2024-10-15 02:05:50.367549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:36:41.613 [2024-10-15 02:05:50.367612] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:36:41.613 [2024-10-15 02:05:50.367630] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:36:41.613 [2024-10-15 02:05:50.367663] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:36:41.613 [2024-10-15 02:05:50.367851] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:36:43.534 02:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73074 00:36:43.534 02:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73074 00:36:43.534 02:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:36:43.534 02:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73074 ']' 00:36:43.534 02:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.534 02:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:36:43.534 "subsystems": [ 00:36:43.534 { 00:36:43.534 "subsystem": "fsdev", 00:36:43.534 "config": [ 00:36:43.534 { 00:36:43.534 "method": "fsdev_set_opts", 00:36:43.534 "params": { 00:36:43.534 "fsdev_io_pool_size": 65535, 00:36:43.534 "fsdev_io_cache_size": 256 00:36:43.534 } 00:36:43.534 } 00:36:43.534 ] 00:36:43.534 }, 00:36:43.534 { 00:36:43.535 "subsystem": "keyring", 00:36:43.535 "config": [] 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "subsystem": "iobuf", 00:36:43.535 "config": [ 00:36:43.535 { 00:36:43.535 "method": "iobuf_set_options", 00:36:43.535 "params": { 00:36:43.535 "small_pool_count": 8192, 00:36:43.535 "large_pool_count": 1024, 00:36:43.535 "small_bufsize": 8192, 00:36:43.535 "large_bufsize": 135168 00:36:43.535 } 00:36:43.535 } 00:36:43.535 ] 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "subsystem": "sock", 00:36:43.535 "config": [ 00:36:43.535 { 00:36:43.535 "method": "sock_set_default_impl", 00:36:43.535 "params": { 00:36:43.535 "impl_name": "posix" 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "sock_impl_set_options", 00:36:43.535 "params": { 00:36:43.535 "impl_name": "ssl", 00:36:43.535 "recv_buf_size": 4096, 00:36:43.535 "send_buf_size": 4096, 00:36:43.535 "enable_recv_pipe": true, 00:36:43.535 "enable_quickack": false, 00:36:43.535 "enable_placement_id": 0, 00:36:43.535 "enable_zerocopy_send_server": true, 00:36:43.535 "enable_zerocopy_send_client": false, 00:36:43.535 "zerocopy_threshold": 0, 00:36:43.535 "tls_version": 0, 00:36:43.535 "enable_ktls": false 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "sock_impl_set_options", 00:36:43.535 "params": { 00:36:43.535 "impl_name": "posix", 00:36:43.535 "recv_buf_size": 2097152, 00:36:43.535 "send_buf_size": 2097152, 00:36:43.535 "enable_recv_pipe": true, 00:36:43.535 "enable_quickack": false, 00:36:43.535 "enable_placement_id": 0, 00:36:43.535 "enable_zerocopy_send_server": true, 00:36:43.535 "enable_zerocopy_send_client": false, 00:36:43.535 "zerocopy_threshold": 0, 00:36:43.535 "tls_version": 0, 00:36:43.535 "enable_ktls": false 00:36:43.535 } 00:36:43.535 } 00:36:43.535 ] 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "subsystem": "vmd", 00:36:43.535 "config": [] 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "subsystem": "accel", 00:36:43.535 "config": [ 00:36:43.535 { 00:36:43.535 "method": "accel_set_options", 00:36:43.535 "params": { 00:36:43.535 "small_cache_size": 128, 00:36:43.535 "large_cache_size": 16, 00:36:43.535 "task_count": 2048, 00:36:43.535 "sequence_count": 2048, 00:36:43.535 "buf_count": 2048 00:36:43.535 } 00:36:43.535 } 00:36:43.535 ] 
00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "subsystem": "bdev", 00:36:43.535 "config": [ 00:36:43.535 { 00:36:43.535 "method": "bdev_set_options", 00:36:43.535 "params": { 00:36:43.535 "bdev_io_pool_size": 65535, 00:36:43.535 "bdev_io_cache_size": 256, 00:36:43.535 "bdev_auto_examine": true, 00:36:43.535 "iobuf_small_cache_size": 128, 00:36:43.535 "iobuf_large_cache_size": 16 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "bdev_raid_set_options", 00:36:43.535 "params": { 00:36:43.535 "process_window_size_kb": 1024, 00:36:43.535 "process_max_bandwidth_mb_sec": 0 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "bdev_iscsi_set_options", 00:36:43.535 "params": { 00:36:43.535 "timeout_sec": 30 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "bdev_nvme_set_options", 00:36:43.535 "params": { 00:36:43.535 "action_on_timeout": "none", 00:36:43.535 "timeout_us": 0, 00:36:43.535 "timeout_admin_us": 0, 00:36:43.535 "keep_alive_timeout_ms": 10000, 00:36:43.535 "arbitration_burst": 0, 00:36:43.535 "low_priority_weight": 0, 00:36:43.535 "medium_priority_weight": 0, 00:36:43.535 "high_priority_weight": 0, 00:36:43.535 "nvme_adminq_poll_period_us": 10000, 00:36:43.535 "nvme_ioq_poll_period_us": 0, 00:36:43.535 "io_queue_requests": 0, 00:36:43.535 "delay_cmd_submit": true, 00:36:43.535 "transport_retry_count": 4, 00:36:43.535 "bdev_retry_count": 3, 00:36:43.535 "transport_ack_timeout": 0, 00:36:43.535 "ctrlr_loss_timeout_sec": 0, 00:36:43.535 "reconnect_delay_sec": 0, 00:36:43.535 "fast_io_fail_timeout_sec": 0, 00:36:43.535 "disable_auto_failback": false, 00:36:43.535 "generate_uuids": false, 00:36:43.535 "transport_tos": 0, 00:36:43.535 "nvme_error_stat": false, 00:36:43.535 "rdma_srq_size": 0, 00:36:43.535 "io_path_stat": false, 00:36:43.535 "allow_accel_sequence": false, 00:36:43.535 "rdma_max_cq_size": 0, 00:36:43.535 "rdma_cm_event_timeout_ms": 0, 00:36:43.535 "dhchap_digests": [ 00:36:43.535 "sha256", 00:36:43.535 "sha384", 00:36:43.535 "sha512" 00:36:43.535 ], 00:36:43.535 "dhchap_dhgroups": [ 00:36:43.535 "null", 00:36:43.535 "ffdhe2048", 00:36:43.535 "ffdhe3072", 00:36:43.535 "ffdhe4096", 00:36:43.535 "ffdhe6144", 00:36:43.535 "ffdhe8192" 00:36:43.535 ] 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "bdev_nvme_set_hotplug", 00:36:43.535 "params": { 00:36:43.535 "period_us": 100000, 00:36:43.535 "enable": false 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "bdev_malloc_create", 00:36:43.535 "params": { 00:36:43.535 "name": "malloc0", 00:36:43.535 "num_blocks": 8192, 00:36:43.535 "block_size": 4096, 00:36:43.535 "physical_block_size": 4096, 00:36:43.535 "uuid": "4fff8a40-667f-4516-b145-cb7594411378", 00:36:43.535 "optimal_io_boundary": 0, 00:36:43.535 "md_size": 0, 00:36:43.535 "dif_type": 0, 00:36:43.535 "dif_is_head_of_md": false, 00:36:43.535 "dif_pi_format": 0 00:36:43.535 } 00:36:43.535 }, 00:36:43.535 { 00:36:43.535 "method": "bdev_wait_for_examine" 00:36:43.536 } 00:36:43.536 ] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "scsi", 00:36:43.536 "config": null 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "scheduler", 00:36:43.536 "config": [ 00:36:43.536 { 00:36:43.536 "method": "framework_set_scheduler", 00:36:43.536 "params": { 00:36:43.536 "name": "static" 00:36:43.536 } 00:36:43.536 } 00:36:43.536 ] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "vhost_scsi", 00:36:43.536 "config": [] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": 
"vhost_blk", 00:36:43.536 "config": [] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "ublk", 00:36:43.536 "config": [ 00:36:43.536 { 00:36:43.536 "method": "ublk_create_target", 00:36:43.536 "params": { 00:36:43.536 "cpumask": "1" 00:36:43.536 } 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "method": "ublk_start_disk", 00:36:43.536 "params": { 00:36:43.536 "bdev_name": "malloc0", 00:36:43.536 "ublk_id": 0, 00:36:43.536 "num_queues": 1, 00:36:43.536 "queue_depth": 128 00:36:43.536 } 00:36:43.536 } 00:36:43.536 ] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "nbd", 00:36:43.536 "config": [] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "nvmf", 00:36:43.536 "config": [ 00:36:43.536 { 00:36:43.536 "method": "nvmf_set_config", 00:36:43.536 "params": { 00:36:43.536 "discovery_filter": "match_any", 00:36:43.536 "admin_cmd_passthru": { 00:36:43.536 "identify_ctrlr": false 00:36:43.536 }, 00:36:43.536 "dhchap_digests": [ 00:36:43.536 "sha256", 00:36:43.536 "sha384", 00:36:43.536 "sha512" 00:36:43.536 ], 00:36:43.536 "dhchap_dhgroups": [ 00:36:43.536 "null", 00:36:43.536 "ffdhe2048", 00:36:43.536 "ffdhe3072", 00:36:43.536 "ffdhe4096", 00:36:43.536 "ffdhe6144", 00:36:43.536 "ffdhe8192" 00:36:43.536 ] 00:36:43.536 } 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "method": 02:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.536 "nvmf_set_max_subsystems", 00:36:43.536 "params": { 00:36:43.536 "max_subsystems": 1024 00:36:43.536 } 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "method": "nvmf_set_crdt", 00:36:43.536 "params": { 00:36:43.536 "crdt1": 0, 00:36:43.536 "crdt2": 0, 00:36:43.536 "crdt3": 0 00:36:43.536 } 00:36:43.536 } 00:36:43.536 ] 00:36:43.536 }, 00:36:43.536 { 00:36:43.536 "subsystem": "iscsi", 00:36:43.536 "config": [ 00:36:43.536 { 00:36:43.536 "method": "iscsi_set_options", 00:36:43.536 "params": { 00:36:43.536 "node_base": "iqn.2016-06.io.spdk", 00:36:43.536 "max_sessions": 128, 00:36:43.536 "max_connections_per_session": 2, 00:36:43.536 "max_queue_depth": 64, 00:36:43.536 "default_time2wait": 2, 00:36:43.536 "default_time2retain": 20, 00:36:43.536 "first_burst_length": 8192, 00:36:43.536 "immediate_data": true, 00:36:43.536 "allow_duplicated_isid": false, 00:36:43.536 "error_recovery_level": 0, 00:36:43.536 "nop_timeout": 60, 00:36:43.536 "nop_in_interval": 30, 00:36:43.536 "disable_chap": false, 00:36:43.536 "require_chap": false, 00:36:43.536 "mutual_chap": false, 00:36:43.536 "chap_group": 0, 00:36:43.536 "max_large_datain_per_connection": 64, 00:36:43.536 "max_r2t_per_connection": 4, 00:36:43.536 "pdu_pool_size": 36864, 00:36:43.536 "immediate_data_pool_size": 16384, 00:36:43.536 "data_out_pool_size": 2048 00:36:43.536 } 00:36:43.536 } 00:36:43.536 ] 00:36:43.536 } 00:36:43.536 ] 00:36:43.536 }' 00:36:43.536 02:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.536 02:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.536 02:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:43.536 [2024-10-15 02:05:52.153268] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:36:43.536 [2024-10-15 02:05:52.153451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73074 ] 00:36:43.536 [2024-10-15 02:05:52.312460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.536 [2024-10-15 02:05:52.521435] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.471 [2024-10-15 02:05:53.398425] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:44.471 [2024-10-15 02:05:53.399591] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:44.471 [2024-10-15 02:05:53.406588] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:36:44.471 [2024-10-15 02:05:53.406680] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:36:44.471 [2024-10-15 02:05:53.406693] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:36:44.471 [2024-10-15 02:05:53.406701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:36:44.471 [2024-10-15 02:05:53.414602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:36:44.471 [2024-10-15 02:05:53.414624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:36:44.471 [2024-10-15 02:05:53.421549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:36:44.471 [2024-10-15 02:05:53.421661] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:36:44.471 [2024-10-15 02:05:53.438535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:36:44.471 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:44.471 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73074 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73074 ']' 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73074 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73074 00:36:44.730 killing process with pid 73074 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:44.730 
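Note: the restore above is judged successful purely by observation: ublk_get_disks reports the device and the node exists as a block special file, without any ublk_* RPC having been re-issued after boot. The check, distilled into rpc.py terms (a sketch of what the rpc_cmd/jq sequence above does):

    dev=$(scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $dev == /dev/ublkb0 && -b $dev ]] && echo 'config restored'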
02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73074' 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73074 00:36:44.730 02:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73074 00:36:46.107 [2024-10-15 02:05:54.843156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:36:46.107 [2024-10-15 02:05:54.893507] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:36:46.107 [2024-10-15 02:05:54.893643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:36:46.107 [2024-10-15 02:05:54.904529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:36:46.107 [2024-10-15 02:05:54.904601] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:36:46.107 [2024-10-15 02:05:54.904615] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:36:46.107 [2024-10-15 02:05:54.904669] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:36:46.107 [2024-10-15 02:05:54.904907] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:36:48.011 02:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:36:48.011 00:36:48.011 real 0m9.178s 00:36:48.011 user 0m7.001s 00:36:48.011 sys 0m3.148s 00:36:48.011 02:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:48.011 02:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:48.011 ************************************ 00:36:48.011 END TEST test_save_ublk_config 00:36:48.011 ************************************ 00:36:48.011 02:05:56 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73158 00:36:48.011 02:05:56 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:48.011 02:05:56 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73158 00:36:48.011 02:05:56 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:36:48.011 02:05:56 ublk -- common/autotest_common.sh@831 -- # '[' -z 73158 ']' 00:36:48.011 02:05:56 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.011 02:05:56 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:48.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.011 02:05:56 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.011 02:05:56 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:48.011 02:05:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:36:48.011 [2024-10-15 02:05:56.733143] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
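Note: the remaining ublk tests share one long-lived target started with -m 0x3 (mask 0b11, so reactors on cores 0 and 1, matching the two "Reactor started" lines just below); path shortened:

    ./build/bin/spdk_tgt -m 0x3 -L ublk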
00:36:48.011 [2024-10-15 02:05:56.733338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73158 ] 00:36:48.011 [2024-10-15 02:05:56.907098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:48.270 [2024-10-15 02:05:57.105617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.270 [2024-10-15 02:05:57.105636] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.838 02:05:57 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:48.838 02:05:57 ublk -- common/autotest_common.sh@864 -- # return 0 00:36:48.838 02:05:57 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:36:48.838 02:05:57 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:48.838 02:05:57 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:48.838 02:05:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:36:49.109 ************************************ 00:36:49.109 START TEST test_create_ublk 00:36:49.109 ************************************ 00:36:49.109 02:05:57 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:36:49.109 02:05:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:36:49.109 02:05:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.109 02:05:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:36:49.109 [2024-10-15 02:05:57.868524] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:49.109 [2024-10-15 02:05:57.870746] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:49.109 02:05:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.109 02:05:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:36:49.109 02:05:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:36:49.109 02:05:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.109 02:05:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:36:49.369 [2024-10-15 02:05:58.137644] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:36:49.369 [2024-10-15 02:05:58.138209] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:36:49.369 [2024-10-15 02:05:58.138235] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:36:49.369 [2024-10-15 02:05:58.138246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:36:49.369 [2024-10-15 02:05:58.148896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:36:49.369 [2024-10-15 02:05:58.148916] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:36:49.369 
[2024-10-15 02:05:58.155562] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:36:49.369 [2024-10-15 02:05:58.156393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:36:49.369 [2024-10-15 02:05:58.179517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:36:49.369 02:05:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:36:49.369 { 00:36:49.369 "ublk_device": "/dev/ublkb0", 00:36:49.369 "id": 0, 00:36:49.369 "queue_depth": 512, 00:36:49.369 "num_queues": 4, 00:36:49.369 "bdev_name": "Malloc0" 00:36:49.369 } 00:36:49.369 ]' 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:36:49.369 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:36:49.628 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:36:49.628 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:36:49.628 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:36:49.628 02:05:58 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
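Note: the smoke test writes a 0xcc pattern over the first 128 MiB of /dev/ublkb0; the assembled command from the template above, rewrapped for readability:

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0

Because --time_based lets the write phase consume the whole 10 s budget, fio prints the "verification read phase will never start" warning seen below; the run still exercises the full write path through the ublk device, which is what the subsequent disk stats confirm.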
00:36:49.628 02:05:58 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:36:49.628 fio: verification read phase will never start because write phase uses all of runtime 00:36:49.628 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:36:49.628 fio-3.35 00:36:49.628 Starting 1 process 00:37:01.846 00:37:01.846 fio_test: (groupid=0, jobs=1): err= 0: pid=73210: Tue Oct 15 02:06:08 2024 00:37:01.846 write: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(498MiB/10001msec); 0 zone resets 00:37:01.846 clat (usec): min=47, max=4036, avg=77.36, stdev=124.95 00:37:01.846 lat (usec): min=47, max=4037, avg=77.98, stdev=124.97 00:37:01.846 clat percentiles (usec): 00:37:01.846 | 1.00th=[ 53], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:37:01.846 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 68], 00:37:01.846 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 97], 00:37:01.846 | 99.00th=[ 130], 99.50th=[ 165], 99.90th=[ 2573], 99.95th=[ 3064], 00:37:01.846 | 99.99th=[ 3589] 00:37:01.846 bw ( KiB/s): min=48832, max=54440, per=100.00%, avg=50997.05, stdev=1221.06, samples=19 00:37:01.846 iops : min=12208, max=13610, avg=12749.26, stdev=305.27, samples=19 00:37:01.846 lat (usec) : 50=0.01%, 100=95.91%, 250=3.68%, 500=0.08%, 750=0.02% 00:37:01.846 lat (usec) : 1000=0.02% 00:37:01.846 lat (msec) : 2=0.11%, 4=0.17%, 10=0.01% 00:37:01.846 cpu : usr=2.90%, sys=7.43%, ctx=127471, majf=0, minf=795 00:37:01.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.846 issued rwts: total=0,127469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:01.846 00:37:01.846 Run status group 0 (all jobs): 00:37:01.846 WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (522MB), run=10001-10001msec 00:37:01.846 00:37:01.846 Disk stats (read/write): 00:37:01.846 ublkb0: ios=0/126137, merge=0/0, ticks=0/8926, in_queue=8926, util=99.10% 00:37:01.846 02:06:08 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 [2024-10-15 02:06:08.714607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:01.846 [2024-10-15 02:06:08.757114] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:01.846 [2024-10-15 02:06:08.758139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:01.846 [2024-10-15 02:06:08.764555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:01.846 [2024-10-15 02:06:08.764891] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:37:01.846 [2024-10-15 02:06:08.764915] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:08 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 [2024-10-15 02:06:08.787611] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:37:01.846 request: 00:37:01.846 { 00:37:01.846 "ublk_id": 0, 00:37:01.846 "method": "ublk_stop_disk", 00:37:01.846 "req_id": 1 00:37:01.846 } 00:37:01.846 Got JSON-RPC error response 00:37:01.846 response: 00:37:01.846 { 00:37:01.846 "code": -19, 00:37:01.846 "message": "No such device" 00:37:01.846 } 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:01.846 02:06:08 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 [2024-10-15 02:06:08.796579] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:01.846 [2024-10-15 02:06:08.803545] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:01.846 [2024-10-15 02:06:08.803588] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:08 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:09 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:37:01.846 02:06:09 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:37:01.846 02:06:09 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:37:01.846 00:37:01.846 real 0m11.654s 00:37:01.846 user 0m0.745s 00:37:01.846 sys 0m0.847s 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.846 ************************************ 00:37:01.846 END TEST test_create_ublk 00:37:01.846 02:06:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 ************************************ 00:37:01.846 02:06:09 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:37:01.846 02:06:09 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:01.846 02:06:09 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.846 02:06:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 ************************************ 00:37:01.846 START TEST test_create_multi_ublk 00:37:01.846 ************************************ 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 [2024-10-15 02:06:09.583522] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:01.846 [2024-10-15 02:06:09.585620] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.846 [2024-10-15 02:06:09.875702] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:37:01.846 [2024-10-15 02:06:09.876260] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:37:01.846 [2024-10-15 02:06:09.876282] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:37:01.846 [2024-10-15 02:06:09.876298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:37:01.846 [2024-10-15 02:06:09.883967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:01.846 [2024-10-15 02:06:09.883995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:01.846 [2024-10-15 02:06:09.891575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:01.846 [2024-10-15 02:06:09.892373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:37:01.846 [2024-10-15 02:06:09.904593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:37:01.846 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:01.847 02:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:37:01.847 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 [2024-10-15 02:06:10.170667] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:37:01.847 [2024-10-15 02:06:10.171246] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:37:01.847 [2024-10-15 02:06:10.171284] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:37:01.847 [2024-10-15 02:06:10.171336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:37:01.847 [2024-10-15 02:06:10.178516] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:01.847 [2024-10-15 02:06:10.178544] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:01.847 [2024-10-15 02:06:10.185535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:01.847 [2024-10-15 02:06:10.186291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:37:01.847 [2024-10-15 02:06:10.194526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:01.847 
02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 [2024-10-15 02:06:10.447629] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:37:01.847 [2024-10-15 02:06:10.448185] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:37:01.847 [2024-10-15 02:06:10.448206] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:37:01.847 [2024-10-15 02:06:10.448218] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:37:01.847 [2024-10-15 02:06:10.453884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:01.847 [2024-10-15 02:06:10.453908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:01.847 [2024-10-15 02:06:10.461577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:01.847 [2024-10-15 02:06:10.462369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:37:01.847 [2024-10-15 02:06:10.470529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 [2024-10-15 02:06:10.721644] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:37:01.847 [2024-10-15 02:06:10.722218] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:37:01.847 [2024-10-15 02:06:10.722243] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:37:01.847 [2024-10-15 02:06:10.722253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:37:01.847 
[2024-10-15 02:06:10.729605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:01.847 [2024-10-15 02:06:10.729648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:01.847 [2024-10-15 02:06:10.737465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:01.847 [2024-10-15 02:06:10.738209] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:37:01.847 [2024-10-15 02:06:10.746523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:37:01.847 { 00:37:01.847 "ublk_device": "/dev/ublkb0", 00:37:01.847 "id": 0, 00:37:01.847 "queue_depth": 512, 00:37:01.847 "num_queues": 4, 00:37:01.847 "bdev_name": "Malloc0" 00:37:01.847 }, 00:37:01.847 { 00:37:01.847 "ublk_device": "/dev/ublkb1", 00:37:01.847 "id": 1, 00:37:01.847 "queue_depth": 512, 00:37:01.847 "num_queues": 4, 00:37:01.847 "bdev_name": "Malloc1" 00:37:01.847 }, 00:37:01.847 { 00:37:01.847 "ublk_device": "/dev/ublkb2", 00:37:01.847 "id": 2, 00:37:01.847 "queue_depth": 512, 00:37:01.847 "num_queues": 4, 00:37:01.847 "bdev_name": "Malloc2" 00:37:01.847 }, 00:37:01.847 { 00:37:01.847 "ublk_device": "/dev/ublkb3", 00:37:01.847 "id": 3, 00:37:01.847 "queue_depth": 512, 00:37:01.847 "num_queues": 4, 00:37:01.847 "bdev_name": "Malloc3" 00:37:01.847 } 00:37:01.847 ]' 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:37:01.847 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:37:02.106 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:37:02.106 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:37:02.106 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:02.106 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:37:02.106 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:02.106 02:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:37:02.106 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:37:02.106 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:02.106 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:37:02.106 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:37:02.106 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:37:02.364 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:37:02.364 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:37:02.364 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:02.364 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:37:02.364 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:02.365 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:37:02.365 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:37:02.365 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:02.365 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:37:02.365 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:37:02.365 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:37:02.623 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:37:02.882 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:37:02.882 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:37:02.882 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:02.882 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:37:02.882 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:02.883 [2024-10-15 02:06:11.838726] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:02.883 [2024-10-15 02:06:11.879588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:02.883 [2024-10-15 02:06:11.880630] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:02.883 [2024-10-15 02:06:11.886537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:02.883 [2024-10-15 02:06:11.886891] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:37:02.883 [2024-10-15 02:06:11.886916] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.883 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:03.141 [2024-10-15 02:06:11.901538] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:37:03.141 [2024-10-15 02:06:11.941550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:03.141 [2024-10-15 02:06:11.942687] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:37:03.141 [2024-10-15 02:06:11.951652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:03.141 [2024-10-15 02:06:11.952051] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:37:03.141 [2024-10-15 02:06:11.952076] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:37:03.141 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.141 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:03.141 02:06:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:37:03.141 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.141 02:06:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:03.141 [2024-10-15 02:06:11.959634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:37:03.141 [2024-10-15 02:06:11.992012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:03.141 [2024-10-15 02:06:11.993028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:37:03.141 [2024-10-15 02:06:11.999540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:03.141 [2024-10-15 02:06:11.999855] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:37:03.141 [2024-10-15 02:06:11.999881] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:37:03.141 [2024-10-15 02:06:12.014587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:37:03.141 [2024-10-15 02:06:12.042896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:03.141 [2024-10-15 02:06:12.044012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:37:03.141 [2024-10-15 02:06:12.049606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:03.141 [2024-10-15 02:06:12.049966] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:37:03.141 [2024-10-15 02:06:12.049995] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.141 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:37:03.400 [2024-10-15 02:06:12.347590] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:03.400 [2024-10-15 02:06:12.350295] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:03.400 [2024-10-15 02:06:12.350328] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:37:03.400 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:37:03.400 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:03.400 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:37:03.400 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.400 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:03.968 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.968 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:03.968 02:06:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:37:03.968 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.968 02:06:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:04.535 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.535 02:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:04.535 02:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:37:04.535 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.535 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:04.794 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.794 02:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:04.794 02:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:37:04.794 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.794 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:37:05.052 02:06:13 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:37:05.052 ************************************ 00:37:05.052 END TEST test_create_multi_ublk 00:37:05.052 ************************************ 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:37:05.052 00:37:05.052 real 0m4.415s 00:37:05.052 user 0m1.364s 00:37:05.052 sys 0m0.173s 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:05.052 02:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:05.052 02:06:14 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:05.052 02:06:14 ublk -- ublk/ublk.sh@147 -- # cleanup 00:37:05.052 02:06:14 ublk -- ublk/ublk.sh@130 -- # killprocess 73158 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@950 -- # '[' -z 73158 ']' 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@954 -- # kill -0 73158 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@955 -- # uname 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73158 00:37:05.052 killing process with pid 73158 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:05.052 02:06:14 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73158' 00:37:05.053 02:06:14 ublk -- common/autotest_common.sh@969 -- # kill 73158 00:37:05.053 02:06:14 ublk -- common/autotest_common.sh@974 -- # wait 73158 00:37:05.988 [2024-10-15 02:06:14.924049] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:05.988 [2024-10-15 02:06:14.924130] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:07.366 00:37:07.366 real 0m28.918s 00:37:07.366 user 0m41.677s 00:37:07.366 sys 0m10.185s 00:37:07.366 02:06:16 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.366 02:06:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:07.366 ************************************ 00:37:07.366 END TEST ublk 00:37:07.366 ************************************ 00:37:07.366 02:06:16 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:37:07.366 
02:06:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:07.366 02:06:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.366 02:06:16 -- common/autotest_common.sh@10 -- # set +x 00:37:07.366 ************************************ 00:37:07.366 START TEST ublk_recovery 00:37:07.366 ************************************ 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:37:07.366 * Looking for test storage... 00:37:07.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.366 02:06:16 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:07.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.366 --rc genhtml_branch_coverage=1 00:37:07.366 --rc genhtml_function_coverage=1 00:37:07.366 --rc genhtml_legend=1 00:37:07.366 --rc geninfo_all_blocks=1 00:37:07.366 --rc geninfo_unexecuted_blocks=1 00:37:07.366 00:37:07.366 ' 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:07.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.366 --rc genhtml_branch_coverage=1 00:37:07.366 --rc genhtml_function_coverage=1 00:37:07.366 --rc genhtml_legend=1 00:37:07.366 --rc geninfo_all_blocks=1 00:37:07.366 --rc geninfo_unexecuted_blocks=1 00:37:07.366 00:37:07.366 ' 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:07.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.366 --rc genhtml_branch_coverage=1 00:37:07.366 --rc genhtml_function_coverage=1 00:37:07.366 --rc genhtml_legend=1 00:37:07.366 --rc geninfo_all_blocks=1 00:37:07.366 --rc geninfo_unexecuted_blocks=1 00:37:07.366 00:37:07.366 ' 00:37:07.366 02:06:16 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:07.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.366 --rc genhtml_branch_coverage=1 00:37:07.366 --rc genhtml_function_coverage=1 00:37:07.366 --rc genhtml_legend=1 00:37:07.366 --rc geninfo_all_blocks=1 00:37:07.366 --rc geninfo_unexecuted_blocks=1 00:37:07.366 00:37:07.366 ' 00:37:07.366 02:06:16 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:37:07.366 02:06:16 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:37:07.366 02:06:16 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:37:07.366 02:06:16 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:37:07.366 02:06:16 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:37:07.366 02:06:16 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:37:07.366 02:06:16 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:37:07.367 02:06:16 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:37:07.367 02:06:16 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:37:07.367 02:06:16 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:37:07.367 02:06:16 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73577 00:37:07.367 02:06:16 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:07.367 02:06:16 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:37:07.367 02:06:16 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73577 00:37:07.367 02:06:16 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73577 ']' 00:37:07.367 02:06:16 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.367 02:06:16 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.367 02:06:16 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.367 02:06:16 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.367 02:06:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.625 [2024-10-15 02:06:16.435603] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:37:07.625 [2024-10-15 02:06:16.435777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73577 ] 00:37:07.625 [2024-10-15 02:06:16.596308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:07.885 [2024-10-15 02:06:16.791390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.885 [2024-10-15 02:06:16.791441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:37:08.821 02:06:17 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.821 [2024-10-15 02:06:17.574504] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:08.821 [2024-10-15 02:06:17.576659] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.821 02:06:17 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.821 malloc0 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.821 02:06:17 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.821 02:06:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.821 [2024-10-15 02:06:17.703687] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:37:08.821 [2024-10-15 02:06:17.703823] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:37:08.821 [2024-10-15 02:06:17.703843] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:37:08.822 [2024-10-15 02:06:17.703852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:37:08.822 [2024-10-15 02:06:17.711776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:08.822 [2024-10-15 02:06:17.711803] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:08.822 [2024-10-15 02:06:17.719605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:08.822 [2024-10-15 02:06:17.719789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:37:08.822 [2024-10-15 02:06:17.735506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:37:08.822 1 00:37:08.822 02:06:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.822 02:06:17 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:37:09.761 02:06:18 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73618 00:37:09.761 02:06:18 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:37:09.761 02:06:18 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:37:10.020 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:10.020 fio-3.35 00:37:10.020 Starting 1 process 00:37:15.326 02:06:23 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73577 00:37:15.326 02:06:23 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:37:20.593 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73577 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:37:20.593 02:06:28 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73734 00:37:20.593 02:06:28 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:20.593 02:06:28 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:37:20.593 02:06:28 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73734 00:37:20.593 02:06:28 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73734 ']' 00:37:20.593 02:06:28 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.593 02:06:28 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:20.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:20.593 02:06:28 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:20.593 02:06:28 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:20.593 02:06:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:20.593 [2024-10-15 02:06:28.887317] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
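At this point the test has hard-killed the first target (pid 73577) with SIGKILL while fio keeps /dev/ublkb1 busy, and a fresh spdk_tgt is coming up to reattach the same kernel device with ublk_recover_disk instead of ublk_start_disk. A condensed sketch of the sequence this section traces, assuming the default RPC socket and with the script's waitforlisten polling elided:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x3 -L ublk & tgt_pid=$!        # first target
    # (wait for the RPC socket here)
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    sleep 5
    kill -9 "$tgt_pid"                        # crash the target mid-I/O
    sleep 5

    "$tgt" -m 0x3 -L ublk & tgt_pid=$!        # fresh target
    # (wait for the RPC socket here)
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_recover_disk malloc0 1        # reattach /dev/ublkb1 rather than re-create it
    wait "$fio_pid"                           # fio rides out the crash and finishes

The UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY control commands that appear later in the log are what this recovery RPC drives in the kernel driver.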
00:37:20.593 [2024-10-15 02:06:28.887516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73734 ] 00:37:20.593 [2024-10-15 02:06:29.063368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:20.593 [2024-10-15 02:06:29.298951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.593 [2024-10-15 02:06:29.298971] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:37:21.160 02:06:30 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:21.160 [2024-10-15 02:06:30.100507] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:21.160 [2024-10-15 02:06:30.102694] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.160 02:06:30 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.160 02:06:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:21.418 malloc0 00:37:21.418 02:06:30 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.418 02:06:30 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:37:21.418 02:06:30 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.418 02:06:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:21.418 [2024-10-15 02:06:30.234637] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:37:21.418 [2024-10-15 02:06:30.234690] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:37:21.418 [2024-10-15 02:06:30.234707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:37:21.418 [2024-10-15 02:06:30.241543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:37:21.418 [2024-10-15 02:06:30.241571] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:37:21.418 1 00:37:21.418 02:06:30 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.418 02:06:30 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73618 00:37:22.353 [2024-10-15 02:06:31.242453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:37:22.353 [2024-10-15 02:06:31.249501] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:37:22.353 [2024-10-15 02:06:31.249524] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:37:23.290 [2024-10-15 02:06:32.249571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:37:23.290 [2024-10-15 02:06:32.253464] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:37:23.290 [2024-10-15 02:06:32.253505] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:37:24.665 [2024-10-15 02:06:33.255482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:37:24.665 [2024-10-15 02:06:33.263497] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:37:24.665 [2024-10-15 02:06:33.263520] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:37:24.665 [2024-10-15 02:06:33.263550] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:37:24.665 [2024-10-15 02:06:33.263656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:37:46.598 [2024-10-15 02:06:54.216605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:37:46.598 [2024-10-15 02:06:54.224272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:37:46.598 [2024-10-15 02:06:54.230807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:37:46.598 [2024-10-15 02:06:54.230883] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:38:13.161 00:38:13.161 fio_test: (groupid=0, jobs=1): err= 0: pid=73621: Tue Oct 15 02:07:19 2024 00:38:13.161 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(2591MiB/60002msec) 00:38:13.161 slat (nsec): min=1915, max=425535, avg=5757.78, stdev=3237.86 00:38:13.161 clat (usec): min=858, max=30489k, avg=5689.77, stdev=297061.41 00:38:13.161 lat (usec): min=877, max=30489k, avg=5695.53, stdev=297061.41 00:38:13.161 clat percentiles (usec): 00:38:13.161 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2540], 00:38:13.161 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:38:13.161 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 3064], 95.00th=[ 3884], 00:38:13.161 | 99.00th=[ 5800], 99.50th=[ 6390], 99.90th=[ 8029], 99.95th=[ 8455], 00:38:13.161 | 99.99th=[13173] 00:38:13.161 bw ( KiB/s): min=25512, max=95417, per=100.00%, avg=88593.68, stdev=11788.22, samples=59 00:38:13.161 iops : min= 6378, max=23854, avg=22148.41, stdev=2947.05, samples=59 00:38:13.161 write: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(2589MiB/60002msec); 0 zone resets 00:38:13.161 slat (usec): min=2, max=2422, avg= 5.97, stdev= 4.45 00:38:13.161 clat (usec): min=852, max=30490k, avg=5880.04, stdev=301903.22 00:38:13.161 lat (usec): min=858, max=30490k, avg=5886.01, stdev=301903.22 00:38:13.161 clat percentiles (usec): 00:38:13.161 | 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:38:13.161 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:38:13.161 | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 3130], 95.00th=[ 3785], 00:38:13.161 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 8094], 99.95th=[ 8586], 00:38:13.161 | 99.99th=[13304] 00:38:13.161 bw ( KiB/s): min=25680, max=95072, per=100.00%, avg=88520.19, stdev=11679.99, samples=59 00:38:13.161 iops : min= 6420, max=23768, avg=22130.03, stdev=2919.99, samples=59 00:38:13.161 lat (usec) : 1000=0.01% 00:38:13.161 lat (msec) : 2=0.20%, 4=95.24%, 10=4.53%, 20=0.01%, >=2000=0.01% 00:38:13.161 cpu : usr=5.01%, sys=12.09%, ctx=41452, majf=0, minf=13 00:38:13.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:38:13.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:13.161 issued rwts: 
total=663423,662718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:13.161 00:38:13.161 Run status group 0 (all jobs): 00:38:13.161 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=2591MiB (2717MB), run=60002-60002msec 00:38:13.161 WRITE: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=2589MiB (2714MB), run=60002-60002msec 00:38:13.161 00:38:13.161 Disk stats (read/write): 00:38:13.161 ublkb1: ios=660901/660193, merge=0/0, ticks=3714431/3772417, in_queue=7486848, util=99.93% 00:38:13.161 02:07:19 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:13.161 [2024-10-15 02:07:19.018745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:38:13.161 [2024-10-15 02:07:19.065453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:38:13.161 [2024-10-15 02:07:19.065623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:38:13.161 [2024-10-15 02:07:19.076604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:13.161 [2024-10-15 02:07:19.076808] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:38:13.161 [2024-10-15 02:07:19.076824] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.161 02:07:19 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:13.161 [2024-10-15 02:07:19.080696] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:13.161 [2024-10-15 02:07:19.085437] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:13.161 [2024-10-15 02:07:19.085505] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.161 02:07:19 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:13.161 02:07:19 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:38:13.161 02:07:19 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73734 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73734 ']' 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73734 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73734 00:38:13.161 killing process with pid 73734 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73734' 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73734 00:38:13.161 02:07:19 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73734 00:38:13.161 
[2024-10-15 02:07:20.465171] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:13.161 [2024-10-15 02:07:20.465239] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:13.161 ************************************ 00:38:13.161 END TEST ublk_recovery 00:38:13.161 ************************************ 00:38:13.161 00:38:13.161 real 1m5.554s 00:38:13.161 user 1m48.488s 00:38:13.161 sys 0m21.752s 00:38:13.161 02:07:21 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:13.161 02:07:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:13.161 02:07:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:38:13.161 02:07:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:13.161 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:38:13.161 02:07:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:38:13.161 02:07:21 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:38:13.161 02:07:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:13.161 02:07:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:13.161 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:38:13.161 ************************************ 00:38:13.161 START TEST ftl 00:38:13.161 ************************************ 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:38:13.161 * Looking for test storage... 
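Between suites the harness reruns its lcov probe (the same block appeared before ublk_recovery and repeats for ftl below): scripts/common.sh splits each version string on '.', '-' and ':' and compares the fields numerically. A simplified sketch of that idiom, folding lt and cmp_versions into one function and omitting the per-field decimal() validation the real helper performs:

    version_lt() {                  # lt 1.15 2 -> cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov < 2: keep the lcov_*_coverage rc flags'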
00:38:13.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.161 02:07:21 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.161 02:07:21 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.161 02:07:21 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.161 02:07:21 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.161 02:07:21 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.161 02:07:21 ftl -- scripts/common.sh@344 -- # case "$op" in 00:38:13.161 02:07:21 ftl -- scripts/common.sh@345 -- # : 1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:13.161 02:07:21 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:13.161 02:07:21 ftl -- scripts/common.sh@365 -- # decimal 1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@353 -- # local d=1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.161 02:07:21 ftl -- scripts/common.sh@355 -- # echo 1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.161 02:07:21 ftl -- scripts/common.sh@366 -- # decimal 2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@353 -- # local d=2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.161 02:07:21 ftl -- scripts/common.sh@355 -- # echo 2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.161 02:07:21 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.161 02:07:21 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.161 02:07:21 ftl -- scripts/common.sh@368 -- # return 0 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:13.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.161 --rc genhtml_branch_coverage=1 00:38:13.161 --rc genhtml_function_coverage=1 00:38:13.161 --rc genhtml_legend=1 00:38:13.161 --rc geninfo_all_blocks=1 00:38:13.161 --rc geninfo_unexecuted_blocks=1 00:38:13.161 00:38:13.161 ' 00:38:13.161 02:07:21 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:13.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.161 --rc genhtml_branch_coverage=1 00:38:13.162 --rc genhtml_function_coverage=1 00:38:13.162 --rc genhtml_legend=1 00:38:13.162 --rc geninfo_all_blocks=1 00:38:13.162 --rc geninfo_unexecuted_blocks=1 00:38:13.162 00:38:13.162 ' 00:38:13.162 02:07:21 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:13.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.162 --rc genhtml_branch_coverage=1 00:38:13.162 --rc genhtml_function_coverage=1 00:38:13.162 --rc 
genhtml_legend=1 00:38:13.162 --rc geninfo_all_blocks=1 00:38:13.162 --rc geninfo_unexecuted_blocks=1 00:38:13.162 00:38:13.162 ' 00:38:13.162 02:07:21 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:13.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.162 --rc genhtml_branch_coverage=1 00:38:13.162 --rc genhtml_function_coverage=1 00:38:13.162 --rc genhtml_legend=1 00:38:13.162 --rc geninfo_all_blocks=1 00:38:13.162 --rc geninfo_unexecuted_blocks=1 00:38:13.162 00:38:13.162 ' 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:13.162 02:07:21 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:38:13.162 02:07:21 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:13.162 02:07:21 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:13.162 02:07:21 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:38:13.162 02:07:21 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:13.162 02:07:21 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:13.162 02:07:21 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:13.162 02:07:21 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:13.162 02:07:21 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:13.162 02:07:21 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:13.162 02:07:21 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:13.162 02:07:21 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:13.162 02:07:21 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:13.162 02:07:21 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:13.162 02:07:21 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:13.162 02:07:21 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:13.162 02:07:21 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:13.162 02:07:21 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:13.162 02:07:21 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:13.162 02:07:21 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:13.162 02:07:21 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:13.162 02:07:21 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:13.162 02:07:21 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:13.162 02:07:21 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:13.162 02:07:21 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:13.162 02:07:21 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:13.162 02:07:21 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.162 02:07:21 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:38:13.162 02:07:21 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:13.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:13.678 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:13.678 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:13.678 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:13.678 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:13.678 02:07:22 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74532 00:38:13.678 02:07:22 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:38:13.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.678 02:07:22 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74532 00:38:13.678 02:07:22 ftl -- common/autotest_common.sh@831 -- # '[' -z 74532 ']' 00:38:13.678 02:07:22 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.678 02:07:22 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.678 02:07:22 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.678 02:07:22 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.678 02:07:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:13.678 [2024-10-15 02:07:22.608926] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:38:13.678 [2024-10-15 02:07:22.609334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74532 ] 00:38:13.937 [2024-10-15 02:07:22.786455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.195 [2024-10-15 02:07:23.053497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.762 02:07:23 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:14.762 02:07:23 ftl -- common/autotest_common.sh@864 -- # return 0 00:38:14.762 02:07:23 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:38:15.020 02:07:23 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:38:15.956 02:07:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:38:15.956 02:07:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@50 -- # break 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:38:16.523 02:07:25 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:38:16.523 02:07:25 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:38:16.782 02:07:25 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:38:16.782 02:07:25 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:38:16.782 02:07:25 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:38:16.782 02:07:25 ftl -- ftl/ftl.sh@63 -- # break 00:38:16.782 02:07:25 ftl -- ftl/ftl.sh@66 -- # killprocess 74532 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@950 -- # '[' -z 74532 ']' 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@954 -- # kill -0 74532 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@955 -- # uname 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74532 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74532' 00:38:16.782 killing process with pid 74532 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@969 -- # kill 74532 00:38:16.782 02:07:25 ftl -- common/autotest_common.sh@974 -- # wait 74532 00:38:18.684 02:07:27 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:38:18.684 02:07:27 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:38:18.684 02:07:27 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:38:18.684 02:07:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:18.684 02:07:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:18.684 ************************************ 00:38:18.684 START TEST ftl_fio_basic 00:38:18.684 ************************************ 00:38:18.684 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:38:18.943 * Looking for test storage... 
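[Editor's note] The disk selection ftl.sh performed above, before launching ftl_fio_basic, is two jq passes over bdev_get_bdevs: the NV cache disk must be non-zoned, at least 1310720 blocks, and expose 64-byte per-block metadata; the base disks are every other large-enough device. Reassembled as a sketch (both jq filters are copied verbatim from the trace; the surrounding shell is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # cache candidates: 64B metadata, non-zoned, >= 1310720 blocks
    cache_disks=$("$rpc" bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    for disk in $cache_disks; do nv_cache=$disk; break; done   # -> 0000:00:10.0 here
    # base candidates: big enough and not the cache device
    base_disks=$("$rpc" bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    # -> 0000:00:11.0, which becomes the base device passed to fio.sh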
00:38:18.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:18.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.943 --rc genhtml_branch_coverage=1 00:38:18.943 --rc genhtml_function_coverage=1 00:38:18.943 --rc genhtml_legend=1 00:38:18.943 --rc geninfo_all_blocks=1 00:38:18.943 --rc geninfo_unexecuted_blocks=1 00:38:18.943 00:38:18.943 ' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:18.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.943 --rc 
genhtml_branch_coverage=1 00:38:18.943 --rc genhtml_function_coverage=1 00:38:18.943 --rc genhtml_legend=1 00:38:18.943 --rc geninfo_all_blocks=1 00:38:18.943 --rc geninfo_unexecuted_blocks=1 00:38:18.943 00:38:18.943 ' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:18.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.943 --rc genhtml_branch_coverage=1 00:38:18.943 --rc genhtml_function_coverage=1 00:38:18.943 --rc genhtml_legend=1 00:38:18.943 --rc geninfo_all_blocks=1 00:38:18.943 --rc geninfo_unexecuted_blocks=1 00:38:18.943 00:38:18.943 ' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:18.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.943 --rc genhtml_branch_coverage=1 00:38:18.943 --rc genhtml_function_coverage=1 00:38:18.943 --rc genhtml_legend=1 00:38:18.943 --rc geninfo_all_blocks=1 00:38:18.943 --rc geninfo_unexecuted_blocks=1 00:38:18.943 00:38:18.943 ' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:18.943 
02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:18.943 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:38:18.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74671 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74671 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74671 ']' 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
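[Editor's note] fio.sh's argument handling is visible in the trace above: the positional parameters become the device pair, and the suite name indexes an associative array to pick the fio job list. Reconstructed as a sketch (the positional mapping is an assumption; the array declaration, values, and exports are the ones traced):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    device=$1          # 0000:00:11.0, the base bdev
    cache_device=$2    # 0000:00:10.0, the NV cache bdev
    tests=${suite[$3]} # "basic" selects the three jobs above
    timeout=240
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json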
00:38:18.944 02:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:18.944 02:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:19.202 [2024-10-15 02:07:27.973481] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:38:19.202 [2024-10-15 02:07:27.973914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74671 ] 00:38:19.202 [2024-10-15 02:07:28.150474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:19.461 [2024-10-15 02:07:28.352432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.461 [2024-10-15 02:07:28.352513] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.461 [2024-10-15 02:07:28.352534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:38:20.401 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:20.659 [2024-10-15 02:07:29.447702] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200036416720 was disconnected and freed. delete nvme_qpair. 
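[Editor's note] create_base_bdev in ftl/common.sh, traced above, reduces to a single RPC; the controller name given with -b plus the namespace index yield the bdev name the rest of the test uses (nvme0 -> nvme0n1). With the traced values:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach the PCIe controller at the base address; namespace 1 shows up as nvme0n1
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0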
00:38:20.659 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:38:20.659 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:20.918 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:20.918 { 00:38:20.918 "name": "nvme0n1", 00:38:20.918 "aliases": [ 00:38:20.918 "58bc0f2d-5c08-458e-bda0-f43eee1422e8" 00:38:20.918 ], 00:38:20.918 "product_name": "NVMe disk", 00:38:20.918 "block_size": 4096, 00:38:20.918 "num_blocks": 1310720, 00:38:20.918 "uuid": "58bc0f2d-5c08-458e-bda0-f43eee1422e8", 00:38:20.918 "numa_id": -1, 00:38:20.918 "assigned_rate_limits": { 00:38:20.918 "rw_ios_per_sec": 0, 00:38:20.918 "rw_mbytes_per_sec": 0, 00:38:20.918 "r_mbytes_per_sec": 0, 00:38:20.918 "w_mbytes_per_sec": 0 00:38:20.918 }, 00:38:20.918 "claimed": false, 00:38:20.918 "zoned": false, 00:38:20.918 "supported_io_types": { 00:38:20.918 "read": true, 00:38:20.918 "write": true, 00:38:20.918 "unmap": true, 00:38:20.918 "flush": true, 00:38:20.918 "reset": true, 00:38:20.918 "nvme_admin": true, 00:38:20.918 "nvme_io": true, 00:38:20.918 "nvme_io_md": false, 00:38:20.918 "write_zeroes": true, 00:38:20.918 "zcopy": false, 00:38:20.918 "get_zone_info": false, 00:38:20.918 "zone_management": false, 00:38:20.918 "zone_append": false, 00:38:20.918 "compare": true, 00:38:20.918 "compare_and_write": false, 00:38:20.918 "abort": true, 00:38:20.918 "seek_hole": false, 00:38:20.918 "seek_data": false, 00:38:20.918 "copy": true, 00:38:20.918 "nvme_iov_md": false 00:38:20.918 }, 00:38:20.918 "driver_specific": { 00:38:20.918 "nvme": [ 00:38:20.918 { 00:38:20.918 "pci_address": "0000:00:11.0", 00:38:20.918 "trid": { 00:38:20.918 "trtype": "PCIe", 00:38:20.918 "traddr": "0000:00:11.0" 00:38:20.918 }, 00:38:20.918 "ctrlr_data": { 00:38:20.918 "cntlid": 0, 00:38:20.918 "vendor_id": "0x1b36", 00:38:20.918 "model_number": "QEMU NVMe Ctrl", 00:38:20.918 "serial_number": "12341", 00:38:20.919 "firmware_revision": "8.0.0", 00:38:20.919 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:20.919 "oacs": { 00:38:20.919 "security": 0, 00:38:20.919 "format": 1, 00:38:20.919 "firmware": 0, 00:38:20.919 "ns_manage": 1 00:38:20.919 }, 00:38:20.919 "multi_ctrlr": false, 00:38:20.919 "ana_reporting": false 00:38:20.919 }, 00:38:20.919 "vs": { 00:38:20.919 "nvme_version": "1.4" 00:38:20.919 }, 00:38:20.919 "ns_data": { 00:38:20.919 "id": 1, 00:38:20.919 "can_share": false 00:38:20.919 } 00:38:20.919 } 00:38:20.919 ], 00:38:20.919 "mp_policy": "active_passive" 00:38:20.919 } 00:38:20.919 } 00:38:20.919 ]' 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 
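[Editor's note] get_bdev_size pulls block_size and num_blocks out of the bdev_get_bdevs JSON with the two jq calls above and converts the product to MiB. The arithmetic for this 5 GiB QEMU namespace (a sketch of the conversion, not the helper verbatim):

    bs=4096 nb=1310720
    echo $(( bs * nb / 1024 / 1024 ))   # 5120, the bdev_size/base_size on the next records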
00:38:20.919 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:20.919 02:07:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:21.177 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:38:21.177 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:21.435 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=f23b9cde-f106-472f-bd47-120e9b6f9b2e 00:38:21.435 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f23b9cde-f106-472f-bd47-120e9b6f9b2e 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:38:21.694 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:21.953 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:21.953 { 00:38:21.953 "name": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:21.953 "aliases": [ 00:38:21.953 "lvs/nvme0n1p0" 00:38:21.953 ], 00:38:21.953 "product_name": "Logical Volume", 00:38:21.953 "block_size": 4096, 00:38:21.953 "num_blocks": 26476544, 00:38:21.953 "uuid": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:21.953 "assigned_rate_limits": { 00:38:21.953 "rw_ios_per_sec": 0, 00:38:21.953 "rw_mbytes_per_sec": 0, 00:38:21.953 "r_mbytes_per_sec": 0, 00:38:21.953 "w_mbytes_per_sec": 0 00:38:21.953 }, 00:38:21.953 "claimed": false, 00:38:21.953 "zoned": false, 00:38:21.953 "supported_io_types": { 00:38:21.953 "read": true, 00:38:21.953 "write": true, 00:38:21.953 "unmap": true, 00:38:21.953 "flush": false, 00:38:21.953 "reset": true, 00:38:21.953 "nvme_admin": false, 00:38:21.953 "nvme_io": false, 00:38:21.953 "nvme_io_md": false, 00:38:21.953 "write_zeroes": true, 00:38:21.953 "zcopy": false, 00:38:21.953 "get_zone_info": false, 00:38:21.953 
"zone_management": false, 00:38:21.953 "zone_append": false, 00:38:21.953 "compare": false, 00:38:21.953 "compare_and_write": false, 00:38:21.953 "abort": false, 00:38:21.953 "seek_hole": true, 00:38:21.953 "seek_data": true, 00:38:21.953 "copy": false, 00:38:21.953 "nvme_iov_md": false 00:38:21.953 }, 00:38:21.953 "driver_specific": { 00:38:21.953 "lvol": { 00:38:21.953 "lvol_store_uuid": "f23b9cde-f106-472f-bd47-120e9b6f9b2e", 00:38:21.953 "base_bdev": "nvme0n1", 00:38:21.953 "thin_provision": true, 00:38:21.953 "num_allocated_clusters": 0, 00:38:21.953 "snapshot": false, 00:38:21.953 "clone": false, 00:38:21.953 "esnap_clone": false 00:38:21.953 } 00:38:21.953 } 00:38:21.953 } 00:38:21.953 ]' 00:38:21.953 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:22.211 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:38:22.211 02:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:22.211 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:22.211 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:22.211 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:38:22.211 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:38:22.211 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:38:22.211 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:22.469 [2024-10-15 02:07:31.344693] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035039da0 was disconnected and freed. delete nvme_qpair. 
00:38:22.469 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:22.469 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:22.470 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:22.470 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:22.470 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:22.470 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:38:22.470 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:38:22.470 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:22.739 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:22.739 { 00:38:22.739 "name": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:22.739 "aliases": [ 00:38:22.739 "lvs/nvme0n1p0" 00:38:22.739 ], 00:38:22.739 "product_name": "Logical Volume", 00:38:22.739 "block_size": 4096, 00:38:22.739 "num_blocks": 26476544, 00:38:22.739 "uuid": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:22.739 "assigned_rate_limits": { 00:38:22.739 "rw_ios_per_sec": 0, 00:38:22.739 "rw_mbytes_per_sec": 0, 00:38:22.739 "r_mbytes_per_sec": 0, 00:38:22.739 "w_mbytes_per_sec": 0 00:38:22.739 }, 00:38:22.739 "claimed": false, 00:38:22.739 "zoned": false, 00:38:22.739 "supported_io_types": { 00:38:22.739 "read": true, 00:38:22.739 "write": true, 00:38:22.739 "unmap": true, 00:38:22.739 "flush": false, 00:38:22.739 "reset": true, 00:38:22.739 "nvme_admin": false, 00:38:22.739 "nvme_io": false, 00:38:22.739 "nvme_io_md": false, 00:38:22.739 "write_zeroes": true, 00:38:22.739 "zcopy": false, 00:38:22.739 "get_zone_info": false, 00:38:22.739 "zone_management": false, 00:38:22.739 "zone_append": false, 00:38:22.740 "compare": false, 00:38:22.740 "compare_and_write": false, 00:38:22.740 "abort": false, 00:38:22.740 "seek_hole": true, 00:38:22.740 "seek_data": true, 00:38:22.740 "copy": false, 00:38:22.740 "nvme_iov_md": false 00:38:22.740 }, 00:38:22.740 "driver_specific": { 00:38:22.740 "lvol": { 00:38:22.740 "lvol_store_uuid": "f23b9cde-f106-472f-bd47-120e9b6f9b2e", 00:38:22.740 "base_bdev": "nvme0n1", 00:38:22.740 "thin_provision": true, 00:38:22.740 "num_allocated_clusters": 0, 00:38:22.740 "snapshot": false, 00:38:22.740 "clone": false, 00:38:22.740 "esnap_clone": false 00:38:22.740 } 00:38:22.740 } 00:38:22.740 } 00:38:22.740 ]' 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:38:22.740 02:07:31 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:23.008 [2024-10-15 02:07:31.964715] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035039da0 
was disconnected and freed. delete nvme_qpair. 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:38:23.008 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:38:23.008 02:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4092d8cb-47b5-4ed6-a0fc-a303a158f684 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:23.575 { 00:38:23.575 "name": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:23.575 "aliases": [ 00:38:23.575 "lvs/nvme0n1p0" 00:38:23.575 ], 00:38:23.575 "product_name": "Logical Volume", 00:38:23.575 "block_size": 4096, 00:38:23.575 "num_blocks": 26476544, 00:38:23.575 "uuid": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:23.575 "assigned_rate_limits": { 00:38:23.575 "rw_ios_per_sec": 0, 00:38:23.575 "rw_mbytes_per_sec": 0, 00:38:23.575 "r_mbytes_per_sec": 0, 00:38:23.575 "w_mbytes_per_sec": 0 00:38:23.575 }, 00:38:23.575 "claimed": false, 00:38:23.575 "zoned": false, 00:38:23.575 "supported_io_types": { 00:38:23.575 "read": true, 00:38:23.575 "write": true, 00:38:23.575 "unmap": true, 00:38:23.575 "flush": false, 00:38:23.575 "reset": true, 00:38:23.575 "nvme_admin": false, 00:38:23.575 "nvme_io": false, 00:38:23.575 "nvme_io_md": false, 00:38:23.575 "write_zeroes": true, 00:38:23.575 "zcopy": false, 00:38:23.575 "get_zone_info": false, 00:38:23.575 "zone_management": false, 00:38:23.575 "zone_append": false, 00:38:23.575 "compare": false, 00:38:23.575 "compare_and_write": false, 00:38:23.575 "abort": false, 00:38:23.575 "seek_hole": true, 00:38:23.575 "seek_data": true, 00:38:23.575 "copy": false, 00:38:23.575 "nvme_iov_md": false 00:38:23.575 }, 00:38:23.575 "driver_specific": { 00:38:23.575 "lvol": { 00:38:23.575 "lvol_store_uuid": "f23b9cde-f106-472f-bd47-120e9b6f9b2e", 00:38:23.575 "base_bdev": "nvme0n1", 00:38:23.575 "thin_provision": true, 00:38:23.575 "num_allocated_clusters": 0, 00:38:23.575 "snapshot": false, 00:38:23.575 "clone": false, 00:38:23.575 "esnap_clone": false 00:38:23.575 } 00:38:23.575 } 00:38:23.575 } 00:38:23.575 ]' 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # 
'[' -z '' ']' 00:38:23.575 02:07:32 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4092d8cb-47b5-4ed6-a0fc-a303a158f684 -c nvc0n1p0 --l2p_dram_limit 60 00:38:23.834 [2024-10-15 02:07:32.622948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.834 [2024-10-15 02:07:32.623020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:23.834 [2024-10-15 02:07:32.623043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:23.834 [2024-10-15 02:07:32.623065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.834 [2024-10-15 02:07:32.623146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.834 [2024-10-15 02:07:32.623164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:23.834 [2024-10-15 02:07:32.623177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:38:23.834 [2024-10-15 02:07:32.623193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.834 [2024-10-15 02:07:32.623247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:23.834 [2024-10-15 02:07:32.624289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:23.834 [2024-10-15 02:07:32.624329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.834 [2024-10-15 02:07:32.624347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:23.834 [2024-10-15 02:07:32.624360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:38:23.834 [2024-10-15 02:07:32.624377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.834 [2024-10-15 02:07:32.624559] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID edf54476-5576-4040-a580-c74a889e26fe 00:38:23.835 [2024-10-15 02:07:32.626513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.626552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:23.835 [2024-10-15 02:07:32.626573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:38:23.835 [2024-10-15 02:07:32.626584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.636204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.636248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:23.835 [2024-10-15 02:07:32.636284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.521 ms 00:38:23.835 [2024-10-15 02:07:32.636295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.636488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.636510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:23.835 [2024-10-15 02:07:32.636544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:38:23.835 [2024-10-15 02:07:32.636555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.636648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.636666] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:23.835 [2024-10-15 02:07:32.636681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:23.835 [2024-10-15 02:07:32.636693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.636737] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:23.835 [2024-10-15 02:07:32.641676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.641732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:23.835 [2024-10-15 02:07:32.641749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.951 ms 00:38:23.835 [2024-10-15 02:07:32.641765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.641816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.641835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:23.835 [2024-10-15 02:07:32.641847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:38:23.835 [2024-10-15 02:07:32.641864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.641912] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:23.835 [2024-10-15 02:07:32.642070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:23.835 [2024-10-15 02:07:32.642091] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:23.835 [2024-10-15 02:07:32.642109] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:23.835 [2024-10-15 02:07:32.642126] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642141] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:23.835 [2024-10-15 02:07:32.642165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:23.835 [2024-10-15 02:07:32.642175] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:23.835 [2024-10-15 02:07:32.642187] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:23.835 [2024-10-15 02:07:32.642198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.642217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:23.835 [2024-10-15 02:07:32.642230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:38:23.835 [2024-10-15 02:07:32.642246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.642343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.642376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:23.835 [2024-10-15 02:07:32.642388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:38:23.835 [2024-10-15 02:07:32.642420] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.642587] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:23.835 [2024-10-15 02:07:32.642613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:23.835 [2024-10-15 02:07:32.642627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:23.835 [2024-10-15 02:07:32.642671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:23.835 [2024-10-15 02:07:32.642706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:23.835 [2024-10-15 02:07:32.642735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:23.835 [2024-10-15 02:07:32.642747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:23.835 [2024-10-15 02:07:32.642758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:23.835 [2024-10-15 02:07:32.642773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:23.835 [2024-10-15 02:07:32.642784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:23.835 [2024-10-15 02:07:32.642797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:23.835 [2024-10-15 02:07:32.642868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:23.835 [2024-10-15 02:07:32.642919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:23.835 [2024-10-15 02:07:32.642975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:23.835 [2024-10-15 02:07:32.642986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:23.835 [2024-10-15 02:07:32.642999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:23.835 [2024-10-15 02:07:32.643010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:23.835 [2024-10-15 02:07:32.643023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:23.835 [2024-10-15 02:07:32.643034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:23.835 [2024-10-15 02:07:32.643050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:23.835 [2024-10-15 02:07:32.643060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:23.835 [2024-10-15 
02:07:32.643073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:23.835 [2024-10-15 02:07:32.643085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:23.835 [2024-10-15 02:07:32.643098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:23.835 [2024-10-15 02:07:32.643109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:23.835 [2024-10-15 02:07:32.643122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:23.835 [2024-10-15 02:07:32.643153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:23.835 [2024-10-15 02:07:32.643167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:23.835 [2024-10-15 02:07:32.643179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:23.835 [2024-10-15 02:07:32.643192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.643203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:23.835 [2024-10-15 02:07:32.643222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:23.835 [2024-10-15 02:07:32.643233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.643246] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:23.835 [2024-10-15 02:07:32.643262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:23.835 [2024-10-15 02:07:32.643284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:23.835 [2024-10-15 02:07:32.643295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:23.835 [2024-10-15 02:07:32.643309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:23.835 [2024-10-15 02:07:32.643321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:23.835 [2024-10-15 02:07:32.643334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:23.835 [2024-10-15 02:07:32.643345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:23.835 [2024-10-15 02:07:32.643359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:23.835 [2024-10-15 02:07:32.643371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:23.835 [2024-10-15 02:07:32.643389] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:23.835 [2024-10-15 02:07:32.643418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:23.835 [2024-10-15 02:07:32.643450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:23.835 [2024-10-15 02:07:32.643464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:23.835 [2024-10-15 02:07:32.643475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:23.835 [2024-10-15 02:07:32.643489] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:23.835 [2024-10-15 02:07:32.643500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:23.835 [2024-10-15 02:07:32.643516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:23.835 [2024-10-15 02:07:32.643528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:23.835 [2024-10-15 02:07:32.643541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:23.835 [2024-10-15 02:07:32.643552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:23.835 [2024-10-15 02:07:32.643619] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:23.835 [2024-10-15 02:07:32.643632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:23.835 [2024-10-15 02:07:32.643658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:23.835 [2024-10-15 02:07:32.643677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:23.835 [2024-10-15 02:07:32.643688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:23.835 [2024-10-15 02:07:32.643704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:23.835 [2024-10-15 02:07:32.643716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:23.835 [2024-10-15 02:07:32.643733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:38:23.835 [2024-10-15 02:07:32.643745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:23.835 [2024-10-15 02:07:32.643829] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
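[Editor's note] Everything from "Check configuration" through the layout dump above is the bring-up of one RPC, issued earlier in the trace with a 240 s client timeout. For reference, with the traced arguments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d 4092d8cb-47b5-4ed6-a0fc-a303a158f684 -c nvc0n1p0 --l2p_dram_limit 60

The dump is internally consistent: 20971520 L2P entries at the reported 4 bytes per entry is exactly the 80.00 MiB l2p region shown in the NV cache layout:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 (MiB)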
00:38:23.835 [2024-10-15 02:07:32.643851] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:28.018 [2024-10-15 02:07:36.483138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.483211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:28.019 [2024-10-15 02:07:36.483268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3839.328 ms 00:38:28.019 [2024-10-15 02:07:36.483281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.529215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.529483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:28.019 [2024-10-15 02:07:36.529537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.619 ms 00:38:28.019 [2024-10-15 02:07:36.529563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.529798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.529825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:28.019 [2024-10-15 02:07:36.529842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:38:28.019 [2024-10-15 02:07:36.529853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.572574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.572624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:28.019 [2024-10-15 02:07:36.572663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.647 ms 00:38:28.019 [2024-10-15 02:07:36.572675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.572726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.572741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:28.019 [2024-10-15 02:07:36.572758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:28.019 [2024-10-15 02:07:36.572769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.573390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.573441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:28.019 [2024-10-15 02:07:36.573498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:38:28.019 [2024-10-15 02:07:36.573527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.573736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.573761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:28.019 [2024-10-15 02:07:36.573778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:38:28.019 [2024-10-15 02:07:36.573793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.594762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.595024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:28.019 [2024-10-15 
02:07:36.595064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.912 ms 00:38:28.019 [2024-10-15 02:07:36.595077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.608179] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:28.019 [2024-10-15 02:07:36.628965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.629050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:28.019 [2024-10-15 02:07:36.629090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.766 ms 00:38:28.019 [2024-10-15 02:07:36.629106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.691339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.691458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:28.019 [2024-10-15 02:07:36.691483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.175 ms 00:38:28.019 [2024-10-15 02:07:36.691502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.691764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.691788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:28.019 [2024-10-15 02:07:36.691820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:38:28.019 [2024-10-15 02:07:36.691848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.718716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.718764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:28.019 [2024-10-15 02:07:36.718782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.767 ms 00:38:28.019 [2024-10-15 02:07:36.718796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.745160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.745238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:28.019 [2024-10-15 02:07:36.745274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.287 ms 00:38:28.019 [2024-10-15 02:07:36.745288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.746328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.746597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:28.019 [2024-10-15 02:07:36.746627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:38:28.019 [2024-10-15 02:07:36.746648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.826824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.826881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:28.019 [2024-10-15 02:07:36.826900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.089 ms 00:38:28.019 [2024-10-15 02:07:36.826914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 
02:07:36.855406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.855688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:28.019 [2024-10-15 02:07:36.855719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.385 ms 00:38:28.019 [2024-10-15 02:07:36.855737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.882554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.882760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:28.019 [2024-10-15 02:07:36.882789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.758 ms 00:38:28.019 [2024-10-15 02:07:36.882805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.909959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.910021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:28.019 [2024-10-15 02:07:36.910039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.069 ms 00:38:28.019 [2024-10-15 02:07:36.910062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.910121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.910141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:28.019 [2024-10-15 02:07:36.910154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:28.019 [2024-10-15 02:07:36.910167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.910347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.019 [2024-10-15 02:07:36.910372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:28.019 [2024-10-15 02:07:36.910395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:38:28.019 [2024-10-15 02:07:36.910466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.019 [2024-10-15 02:07:36.912087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4288.520 ms, result 0 00:38:28.019 { 00:38:28.019 "name": "ftl0", 00:38:28.019 "uuid": "edf54476-5576-4040-a580-c74a889e26fe" 00:38:28.019 } 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:28.019 02:07:36 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:28.278 02:07:37 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:38:28.537 [ 00:38:28.537 { 00:38:28.537 "name": "ftl0", 00:38:28.537 "aliases": [ 00:38:28.537 "edf54476-5576-4040-a580-c74a889e26fe" 00:38:28.537 ], 00:38:28.537 "product_name": "FTL 
disk", 00:38:28.537 "block_size": 4096, 00:38:28.537 "num_blocks": 20971520, 00:38:28.537 "uuid": "edf54476-5576-4040-a580-c74a889e26fe", 00:38:28.537 "assigned_rate_limits": { 00:38:28.537 "rw_ios_per_sec": 0, 00:38:28.537 "rw_mbytes_per_sec": 0, 00:38:28.537 "r_mbytes_per_sec": 0, 00:38:28.537 "w_mbytes_per_sec": 0 00:38:28.537 }, 00:38:28.537 "claimed": false, 00:38:28.537 "zoned": false, 00:38:28.537 "supported_io_types": { 00:38:28.537 "read": true, 00:38:28.537 "write": true, 00:38:28.537 "unmap": true, 00:38:28.537 "flush": true, 00:38:28.537 "reset": false, 00:38:28.537 "nvme_admin": false, 00:38:28.537 "nvme_io": false, 00:38:28.537 "nvme_io_md": false, 00:38:28.537 "write_zeroes": true, 00:38:28.537 "zcopy": false, 00:38:28.537 "get_zone_info": false, 00:38:28.537 "zone_management": false, 00:38:28.537 "zone_append": false, 00:38:28.537 "compare": false, 00:38:28.537 "compare_and_write": false, 00:38:28.537 "abort": false, 00:38:28.537 "seek_hole": false, 00:38:28.537 "seek_data": false, 00:38:28.537 "copy": false, 00:38:28.537 "nvme_iov_md": false 00:38:28.537 }, 00:38:28.537 "driver_specific": { 00:38:28.537 "ftl": { 00:38:28.537 "base_bdev": "4092d8cb-47b5-4ed6-a0fc-a303a158f684", 00:38:28.537 "cache": "nvc0n1p0" 00:38:28.537 } 00:38:28.537 } 00:38:28.537 } 00:38:28.537 ] 00:38:28.537 02:07:37 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:38:28.537 02:07:37 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:38:28.537 02:07:37 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:38:28.796 02:07:37 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:38:28.796 02:07:37 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:29.054 [2024-10-15 02:07:37.904436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.904488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:29.054 [2024-10-15 02:07:37.904528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:29.054 [2024-10-15 02:07:37.904540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.904590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:29.054 [2024-10-15 02:07:37.908026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.908084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:29.054 [2024-10-15 02:07:37.908099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:38:29.054 [2024-10-15 02:07:37.908116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.908616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.908655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:29.054 [2024-10-15 02:07:37.908671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:38:29.054 [2024-10-15 02:07:37.908688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.911611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.911654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:29.054 
[2024-10-15 02:07:37.911669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:38:29.054 [2024-10-15 02:07:37.911682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.917367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.917434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:29.054 [2024-10-15 02:07:37.917466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.656 ms 00:38:29.054 [2024-10-15 02:07:37.917479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.944749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.944978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:29.054 [2024-10-15 02:07:37.945008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.165 ms 00:38:29.054 [2024-10-15 02:07:37.945023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.962284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.962358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:29.054 [2024-10-15 02:07:37.962377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.204 ms 00:38:29.054 [2024-10-15 02:07:37.962390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.962705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.962734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:29.054 [2024-10-15 02:07:37.962748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:38:29.054 [2024-10-15 02:07:37.962765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:37.989498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:37.989558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:29.054 [2024-10-15 02:07:37.989575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.703 ms 00:38:29.054 [2024-10-15 02:07:37.989588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:38.015997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:38.016060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:29.054 [2024-10-15 02:07:38.016092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.357 ms 00:38:29.054 [2024-10-15 02:07:38.016105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.054 [2024-10-15 02:07:38.042075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.054 [2024-10-15 02:07:38.042136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:29.054 [2024-10-15 02:07:38.042153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.919 ms 00:38:29.054 [2024-10-15 02:07:38.042165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.314 [2024-10-15 02:07:38.069229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.314 [2024-10-15 02:07:38.069291] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:29.314 [2024-10-15 02:07:38.069308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.935 ms 00:38:29.314 [2024-10-15 02:07:38.069321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.314 [2024-10-15 02:07:38.069370] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:29.314 [2024-10-15 02:07:38.069413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 
[2024-10-15 02:07:38.069751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.069986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:38:29.314 [2024-10-15 02:07:38.070091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:29.314 [2024-10-15 02:07:38.070858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:29.315 [2024-10-15 02:07:38.070870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:29.315 [2024-10-15 02:07:38.070886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:29.315 [2024-10-15 02:07:38.070898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:29.315 [2024-10-15 02:07:38.070938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:29.315 [2024-10-15 02:07:38.070949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: edf54476-5576-4040-a580-c74a889e26fe 00:38:29.315 [2024-10-15 02:07:38.070963] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:29.315 [2024-10-15 02:07:38.070973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:29.315 [2024-10-15 02:07:38.070986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:29.315 [2024-10-15 02:07:38.070997] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:29.315 [2024-10-15 02:07:38.071010] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:29.315 [2024-10-15 02:07:38.071021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:29.315 [2024-10-15 02:07:38.071034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:29.315 [2024-10-15 02:07:38.071044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:29.315 [2024-10-15 02:07:38.071056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:29.315 [2024-10-15 02:07:38.071068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.315 [2024-10-15 02:07:38.071081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:29.315 [2024-10-15 02:07:38.071096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:38:29.315 [2024-10-15 02:07:38.071109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.086449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.315 [2024-10-15 02:07:38.086542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:29.315 [2024-10-15 02:07:38.086589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.231 ms 00:38:29.315 [2024-10-15 02:07:38.086604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.087131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.315 [2024-10-15 02:07:38.087185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:29.315 [2024-10-15 02:07:38.087200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:38:29.315 [2024-10-15 02:07:38.087214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.138930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.138996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:29.315 [2024-10-15 02:07:38.139029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.139043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:38:29.315 [2024-10-15 02:07:38.139108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.139129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:29.315 [2024-10-15 02:07:38.139140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.139153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.139268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.139295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:29.315 [2024-10-15 02:07:38.139307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.139319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.139352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.139368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:29.315 [2024-10-15 02:07:38.139382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.139394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.241176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.241251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:29.315 [2024-10-15 02:07:38.241271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.241285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.315249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.315551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:29.315 [2024-10-15 02:07:38.315581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.315599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.315731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.315758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:29.315 [2024-10-15 02:07:38.315771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.315785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.315873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.315909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:29.315 [2024-10-15 02:07:38.315922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.315939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.316091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.316115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:29.315 [2024-10-15 02:07:38.316142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 
02:07:38.316155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.316215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.316237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:29.315 [2024-10-15 02:07:38.316249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.316261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.316320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.316340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:29.315 [2024-10-15 02:07:38.316350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.316363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.316441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.315 [2024-10-15 02:07:38.316463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:29.315 [2024-10-15 02:07:38.316475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.315 [2024-10-15 02:07:38.316491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.315 [2024-10-15 02:07:38.316675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.232 ms, result 0 00:38:29.315 [2024-10-15 02:07:38.317686] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035039da0 was disconnected and freed. delete nvme_qpair. 00:38:29.315 true 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74671 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74671 ']' 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74671 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74671 00:38:29.574 killing process with pid 74671 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74671' 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74671 00:38:29.574 02:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74671 00:38:30.511 [2024-10-15 02:07:39.267293] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200036416920 was disconnected and freed. delete nvme_qpair. 
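The three fio sections that follow (randw-verify, randw-verify-j2, randw-verify-depth128) all drive I/O through SPDK's fio bdev plugin rather than a kernel block device. As a minimal sketch reconstructed from the xtrace lines below — the paths are this CI VM's, and the libasan preload is only present because this is an ASan build — each fio_bdev invocation boils down to:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
    # locate the ASan runtime the plugin links against (empty on non-sanitizer builds)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer runtime ahead of the spdk_bdev ioengine, then run fio
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"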
00:38:34.700 02:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:34.700 02:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:38:34.700 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:38:34.700 fio-3.35 00:38:34.700 Starting 1 thread 00:38:39.997 00:38:39.997 test: (groupid=0, jobs=1): err= 0: pid=74895: Tue Oct 15 02:07:48 2024 00:38:39.997 read: IOPS=862, BW=57.3MiB/s (60.0MB/s)(255MiB/4445msec) 00:38:39.997 slat (nsec): min=5289, max=62246, avg=9373.22, stdev=5438.46 00:38:39.997 clat (usec): min=335, max=905, avg=515.98, stdev=67.08 00:38:39.997 lat (usec): min=341, max=928, avg=525.35, stdev=68.92 00:38:39.997 clat percentiles (usec): 00:38:39.997 | 1.00th=[ 404], 5.00th=[ 437], 10.00th=[ 449], 20.00th=[ 461], 00:38:39.997 | 30.00th=[ 469], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 523], 00:38:39.997 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 644], 00:38:39.997 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 881], 99.95th=[ 898], 00:38:39.997 | 99.99th=[ 906] 00:38:39.997 write: IOPS=868, BW=57.7MiB/s (60.5MB/s)(256MiB/4440msec); 0 zone resets 00:38:39.997 slat (usec): 
min=17, max=129, avg=25.47, stdev= 8.20 00:38:39.997 clat (usec): min=377, max=1222, avg=591.62, stdev=73.09 00:38:39.997 lat (usec): min=398, max=1245, avg=617.09, stdev=74.50 00:38:39.997 clat percentiles (usec): 00:38:39.997 | 1.00th=[ 457], 5.00th=[ 498], 10.00th=[ 519], 20.00th=[ 537], 00:38:39.997 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 603], 00:38:39.997 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:38:39.997 | 99.00th=[ 848], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1090], 00:38:39.997 | 99.99th=[ 1221] 00:38:39.997 bw ( KiB/s): min=55080, max=62424, per=100.00%, avg=59398.00, stdev=3241.05, samples=8 00:38:39.997 iops : min= 810, max= 918, avg=873.50, stdev=47.66, samples=8 00:38:39.997 lat (usec) : 500=26.95%, 750=71.39%, 1000=1.61% 00:38:39.997 lat (msec) : 2=0.05% 00:38:39.997 cpu : usr=98.76%, sys=0.18%, ctx=9, majf=0, minf=1169 00:38:39.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:39.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:39.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:39.998 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:39.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:39.998 00:38:39.998 Run status group 0 (all jobs): 00:38:39.998 READ: bw=57.3MiB/s (60.0MB/s), 57.3MiB/s-57.3MiB/s (60.0MB/s-60.0MB/s), io=255MiB (267MB), run=4445-4445msec 00:38:39.998 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=256MiB (269MB), run=4440-4440msec 00:38:41.901 ----------------------------------------------------- 00:38:41.901 Suppressions used: 00:38:41.901 count bytes template 00:38:41.901 1 5 /usr/src/fio/parse.c 00:38:41.901 1 8 libtcmalloc_minimal.so 00:38:41.901 1 904 libcrypto.so 00:38:41.901 ----------------------------------------------------- 00:38:41.901 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:41.901 
02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:41.901 02:07:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:38:41.901 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:38:41.901 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:38:41.901 fio-3.35 00:38:41.901 Starting 2 threads 00:39:13.990 00:39:13.990 first_half: (groupid=0, jobs=1): err= 0: pid=75003: Tue Oct 15 02:08:21 2024 00:39:13.990 read: IOPS=2210, BW=8843KiB/s (9055kB/s)(255MiB/29514msec) 00:39:13.990 slat (nsec): min=3803, max=97727, avg=9957.24, stdev=5221.40 00:39:13.990 clat (usec): min=842, max=327446, avg=43967.33, stdev=22274.61 00:39:13.990 lat (usec): min=852, max=327451, avg=43977.29, stdev=22274.75 00:39:13.990 clat percentiles (msec): 00:39:13.990 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 40], 00:39:13.990 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:39:13.990 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 50], 00:39:13.990 | 99.00th=[ 180], 99.50th=[ 205], 99.90th=[ 251], 99.95th=[ 275], 00:39:13.990 | 99.99th=[ 317] 00:39:13.990 write: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(256MiB/25022msec); 0 zone resets 00:39:13.990 slat (usec): min=4, max=593, avg=10.89, stdev= 8.02 00:39:13.990 clat (usec): min=484, max=99209, avg=13778.36, stdev=22310.27 00:39:13.990 lat (usec): min=501, max=99217, avg=13789.26, stdev=22310.91 00:39:13.990 clat percentiles (usec): 00:39:13.990 | 1.00th=[ 914], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1663], 00:39:13.990 | 30.00th=[ 3228], 40.00th=[ 5276], 50.00th=[ 6456], 60.00th=[ 7308], 00:39:13.990 | 70.00th=[ 8848], 80.00th=[13960], 90.00th=[39060], 95.00th=[83362], 00:39:13.990 | 99.00th=[89654], 99.50th=[91751], 99.90th=[95945], 99.95th=[96994], 00:39:13.990 | 99.99th=[98042] 00:39:13.990 bw ( KiB/s): min= 192, max=40080, per=100.00%, avg=20971.52, stdev=12518.12, samples=25 00:39:13.990 iops : min= 48, max=10020, avg=5242.88, stdev=3129.53, samples=25 00:39:13.990 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.81% 00:39:13.990 lat (msec) : 2=11.59%, 4=4.76%, 10=19.77%, 20=8.91%, 50=47.68% 00:39:13.990 lat (msec) : 100=5.15%, 250=1.22%, 500=0.05% 00:39:13.990 cpu : usr=98.61%, sys=0.49%, ctx=46, majf=0, minf=5585 00:39:13.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:13.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.991 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:39:13.991 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:13.991 second_half: (groupid=0, jobs=1): err= 0: pid=75004: Tue Oct 15 02:08:21 2024 00:39:13.991 read: IOPS=2227, BW=8910KiB/s (9123kB/s)(255MiB/29255msec) 00:39:13.991 slat (nsec): min=3783, max=97332, avg=10949.40, stdev=6004.77 00:39:13.991 clat (usec): min=797, max=332988, avg=44709.10, stdev=19871.57 00:39:13.991 lat (usec): min=807, max=332999, avg=44720.05, stdev=19871.70 00:39:13.991 clat percentiles (msec): 00:39:13.991 | 1.00th=[ 7], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:39:13.991 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 43], 00:39:13.991 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 47], 95.00th=[ 54], 00:39:13.991 | 99.00th=[ 159], 99.50th=[ 186], 99.90th=[ 209], 99.95th=[ 215], 00:39:13.991 | 99.99th=[ 326] 00:39:13.991 write: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(256MiB/20641msec); 0 zone resets 00:39:13.991 slat (usec): min=4, max=293, avg=11.27, stdev= 7.50 00:39:13.991 clat (usec): min=420, max=99796, avg=12637.42, stdev=21953.20 00:39:13.991 lat (usec): min=438, max=99804, avg=12648.69, stdev=21953.31 00:39:13.991 clat percentiles (usec): 00:39:13.991 | 1.00th=[ 1004], 5.00th=[ 1254], 10.00th=[ 1385], 20.00th=[ 1614], 00:39:13.991 | 30.00th=[ 2008], 40.00th=[ 3916], 50.00th=[ 5669], 60.00th=[ 7111], 00:39:13.991 | 70.00th=[ 8586], 80.00th=[12911], 90.00th=[20055], 95.00th=[83362], 00:39:13.991 | 99.00th=[89654], 99.50th=[92799], 99.90th=[95945], 99.95th=[96994], 00:39:13.991 | 99.99th=[99091] 00:39:13.991 bw ( KiB/s): min= 512, max=49072, per=100.00%, avg=20971.52, stdev=12266.43, samples=25 00:39:13.991 iops : min= 128, max=12268, avg=5242.88, stdev=3066.61, samples=25 00:39:13.991 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.46% 00:39:13.991 lat (msec) : 2=14.63%, 4=5.56%, 10=16.73%, 20=8.50%, 50=47.33% 00:39:13.991 lat (msec) : 100=5.35%, 250=1.38%, 500=0.01% 00:39:13.991 cpu : usr=98.50%, sys=0.55%, ctx=339, majf=0, minf=5544 00:39:13.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:39:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.991 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:13.991 issued rwts: total=65162,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:13.991 00:39:13.991 Run status group 0 (all jobs): 00:39:13.991 READ: bw=17.3MiB/s (18.1MB/s), 8843KiB/s-8910KiB/s (9055kB/s-9123kB/s), io=509MiB (534MB), run=29255-29514msec 00:39:13.991 WRITE: bw=20.5MiB/s (21.5MB/s), 10.2MiB/s-12.4MiB/s (10.7MB/s-13.0MB/s), io=512MiB (537MB), run=20641-25022msec 00:39:14.927 ----------------------------------------------------- 00:39:14.927 Suppressions used: 00:39:14.927 count bytes template 00:39:14.927 2 10 /usr/src/fio/parse.c 00:39:14.927 4 384 /usr/src/fio/iolog.c 00:39:14.927 1 8 libtcmalloc_minimal.so 00:39:14.927 1 904 libcrypto.so 00:39:14.927 ----------------------------------------------------- 00:39:14.927 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # 
timing_enter randw-verify-depth128 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:14.927 02:08:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:39:15.185 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:39:15.185 fio-3.35 00:39:15.185 Starting 1 thread 00:39:33.274 00:39:33.274 test: (groupid=0, jobs=1): err= 0: pid=75373: Tue Oct 15 02:08:41 2024 00:39:33.274 read: IOPS=6382, BW=24.9MiB/s (26.1MB/s)(255MiB/10216msec) 00:39:33.274 slat (nsec): min=3858, max=76483, avg=9475.77, stdev=5686.13 00:39:33.274 clat (usec): min=1215, max=38267, avg=20040.68, stdev=1438.44 00:39:33.274 lat (usec): min=1219, max=38276, avg=20050.16, stdev=1438.51 00:39:33.274 clat percentiles (usec): 00:39:33.274 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:39:33.274 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19792], 60.00th=[19792], 00:39:33.274 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20841], 95.00th=[23462], 00:39:33.274 | 99.00th=[25560], 99.50th=[25822], 99.90th=[29230], 99.95th=[33817], 00:39:33.274 | 99.99th=[37487] 00:39:33.274 write: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(256MiB/6012msec); 0 zone resets 00:39:33.274 slat (usec): min=4, max=706, avg=11.89, stdev= 9.72 00:39:33.274 clat (usec): min=707, max=64444, avg=11672.78, stdev=14628.88 00:39:33.274 lat (usec): min=713, max=64458, avg=11684.66, 
stdev=14629.05 00:39:33.274 clat percentiles (usec): 00:39:33.274 | 1.00th=[ 1057], 5.00th=[ 1254], 10.00th=[ 1369], 20.00th=[ 1532], 00:39:33.274 | 30.00th=[ 1696], 40.00th=[ 2147], 50.00th=[ 7832], 60.00th=[ 8848], 00:39:33.274 | 70.00th=[10290], 80.00th=[11994], 90.00th=[43254], 95.00th=[45351], 00:39:33.274 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52691], 99.95th=[54264], 00:39:33.274 | 99.99th=[62653] 00:39:33.274 bw ( KiB/s): min= 1016, max=62992, per=92.49%, avg=40329.23, stdev=14394.63, samples=13 00:39:33.274 iops : min= 254, max=15748, avg=10082.46, stdev=3598.63, samples=13 00:39:33.274 lat (usec) : 750=0.01%, 1000=0.30% 00:39:33.274 lat (msec) : 2=19.18%, 4=1.35%, 10=13.41%, 20=40.64%, 50=24.82% 00:39:33.274 lat (msec) : 100=0.30% 00:39:33.274 cpu : usr=97.49%, sys=1.40%, ctx=28, majf=0, minf=5565 00:39:33.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:33.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.274 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:33.274 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:33.274 00:39:33.274 Run status group 0 (all jobs): 00:39:33.274 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10216-10216msec 00:39:33.274 WRITE: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=256MiB (268MB), run=6012-6012msec 00:39:34.210 ----------------------------------------------------- 00:39:34.210 Suppressions used: 00:39:34.210 count bytes template 00:39:34.210 1 5 /usr/src/fio/parse.c 00:39:34.210 2 192 /usr/src/fio/iolog.c 00:39:34.210 1 8 libtcmalloc_minimal.so 00:39:34.210 1 904 libcrypto.so 00:39:34.210 ----------------------------------------------------- 00:39:34.210 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:34.210 Remove shared memory files 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:39:34.210 02:08:43 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:39:34.469 02:08:43 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58321 /dev/shm/spdk_tgt_trace.pid73577 00:39:34.469 02:08:43 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:34.469 02:08:43 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:39:34.469 ************************************ 00:39:34.469 END TEST ftl_fio_basic 00:39:34.469 ************************************ 00:39:34.469 00:39:34.469 real 1m15.596s 00:39:34.469 user 2m48.406s 00:39:34.469 sys 0m4.513s 00:39:34.469 02:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:34.469 02:08:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:34.469 02:08:43 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:39:34.469 02:08:43 
ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:34.469 02:08:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:34.469 02:08:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:34.469 ************************************ 00:39:34.469 START TEST ftl_bdevperf 00:39:34.469 ************************************ 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:39:34.469 * Looking for test storage... 00:39:34.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:34.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.469 --rc genhtml_branch_coverage=1 00:39:34.469 --rc genhtml_function_coverage=1 00:39:34.469 --rc genhtml_legend=1 00:39:34.469 --rc geninfo_all_blocks=1 00:39:34.469 --rc geninfo_unexecuted_blocks=1 00:39:34.469 00:39:34.469 ' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:34.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.469 --rc genhtml_branch_coverage=1 00:39:34.469 --rc genhtml_function_coverage=1 00:39:34.469 --rc genhtml_legend=1 00:39:34.469 --rc geninfo_all_blocks=1 00:39:34.469 --rc geninfo_unexecuted_blocks=1 00:39:34.469 00:39:34.469 ' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:34.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.469 --rc genhtml_branch_coverage=1 00:39:34.469 --rc genhtml_function_coverage=1 00:39:34.469 --rc genhtml_legend=1 00:39:34.469 --rc geninfo_all_blocks=1 00:39:34.469 --rc geninfo_unexecuted_blocks=1 00:39:34.469 00:39:34.469 ' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:34.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.469 --rc genhtml_branch_coverage=1 00:39:34.469 --rc genhtml_function_coverage=1 00:39:34.469 --rc genhtml_legend=1 00:39:34.469 --rc geninfo_all_blocks=1 00:39:34.469 --rc geninfo_unexecuted_blocks=1 00:39:34.469 00:39:34.469 ' 00:39:34.469 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
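The xtrace above steps through the lcov version gate in scripts/common.sh: 'lt 1.15 2' calls 'cmp_versions 1.15 "<" 2', which splits each version string on '.', '-', and ':' into the ver1/ver2 arrays and compares components left to right. A minimal standalone sketch of that idiom follows; it assumes plain bash with purely numeric components and mirrors the helper rather than reusing it:

    # Sketch of the component-wise version comparison traced above
    # (assumption: numeric components only; not the repo helper itself).
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first version newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first version older
        done
        return 1                                              # equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* coverage options"

For '1.15' vs '2' the loop compares 1 against 2 on the first component and returns 0 (less-than), which is exactly the 'return 0' the trace records before LCOV_OPTS is exported.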
00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75639 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75639 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@831 
-- # '[' -z 75639 ']' 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:34.729 02:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:34.729 [2024-10-15 02:08:43.617553] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:39:34.729 [2024-10-15 02:08:43.618035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75639 ] 00:39:34.988 [2024-10-15 02:08:43.790909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.246 [2024-10-15 02:08:44.001043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:39:35.814 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:39:36.072 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:39:36.073 02:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:39:36.331 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:39:36.331 { 00:39:36.331 "name": "nvme0n1", 00:39:36.331 "aliases": [ 00:39:36.331 "0f6f2933-0596-4eb8-beb8-f2f513f6bab8" 00:39:36.331 ], 00:39:36.331 "product_name": "NVMe disk", 00:39:36.331 "block_size": 4096, 00:39:36.331 "num_blocks": 1310720, 00:39:36.331 "uuid": "0f6f2933-0596-4eb8-beb8-f2f513f6bab8", 00:39:36.331 "numa_id": -1, 00:39:36.331 "assigned_rate_limits": { 00:39:36.331 "rw_ios_per_sec": 0, 00:39:36.331 "rw_mbytes_per_sec": 0, 00:39:36.331 "r_mbytes_per_sec": 0, 00:39:36.331 "w_mbytes_per_sec": 0 00:39:36.331 }, 00:39:36.331 "claimed": true, 00:39:36.331 
"claim_type": "read_many_write_one", 00:39:36.331 "zoned": false, 00:39:36.331 "supported_io_types": { 00:39:36.331 "read": true, 00:39:36.331 "write": true, 00:39:36.331 "unmap": true, 00:39:36.331 "flush": true, 00:39:36.331 "reset": true, 00:39:36.331 "nvme_admin": true, 00:39:36.331 "nvme_io": true, 00:39:36.331 "nvme_io_md": false, 00:39:36.331 "write_zeroes": true, 00:39:36.331 "zcopy": false, 00:39:36.331 "get_zone_info": false, 00:39:36.331 "zone_management": false, 00:39:36.331 "zone_append": false, 00:39:36.331 "compare": true, 00:39:36.331 "compare_and_write": false, 00:39:36.331 "abort": true, 00:39:36.331 "seek_hole": false, 00:39:36.331 "seek_data": false, 00:39:36.331 "copy": true, 00:39:36.331 "nvme_iov_md": false 00:39:36.331 }, 00:39:36.331 "driver_specific": { 00:39:36.331 "nvme": [ 00:39:36.331 { 00:39:36.331 "pci_address": "0000:00:11.0", 00:39:36.331 "trid": { 00:39:36.331 "trtype": "PCIe", 00:39:36.331 "traddr": "0000:00:11.0" 00:39:36.331 }, 00:39:36.331 "ctrlr_data": { 00:39:36.331 "cntlid": 0, 00:39:36.331 "vendor_id": "0x1b36", 00:39:36.331 "model_number": "QEMU NVMe Ctrl", 00:39:36.331 "serial_number": "12341", 00:39:36.331 "firmware_revision": "8.0.0", 00:39:36.331 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:36.332 "oacs": { 00:39:36.332 "security": 0, 00:39:36.332 "format": 1, 00:39:36.332 "firmware": 0, 00:39:36.332 "ns_manage": 1 00:39:36.332 }, 00:39:36.332 "multi_ctrlr": false, 00:39:36.332 "ana_reporting": false 00:39:36.332 }, 00:39:36.332 "vs": { 00:39:36.332 "nvme_version": "1.4" 00:39:36.332 }, 00:39:36.332 "ns_data": { 00:39:36.332 "id": 1, 00:39:36.332 "can_share": false 00:39:36.332 } 00:39:36.332 } 00:39:36.332 ], 00:39:36.332 "mp_policy": "active_passive" 00:39:36.332 } 00:39:36.332 } 00:39:36.332 ]' 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:36.332 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:36.590 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=f23b9cde-f106-472f-bd47-120e9b6f9b2e 00:39:36.590 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:39:36.590 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f23b9cde-f106-472f-bd47-120e9b6f9b2e 00:39:36.849 [2024-10-15 02:08:45.649543] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 
00:39:36.849 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:39:37.108 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c 00:39:37.108 02:08:45 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c 00:39:37.366 02:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.367 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:39:37.367 { 00:39:37.367 "name": "f05b6321-1ac0-406e-be33-4f7668692978", 00:39:37.367 "aliases": [ 00:39:37.367 "lvs/nvme0n1p0" 00:39:37.367 ], 00:39:37.367 "product_name": "Logical Volume", 00:39:37.367 "block_size": 4096, 00:39:37.367 "num_blocks": 26476544, 00:39:37.367 "uuid": "f05b6321-1ac0-406e-be33-4f7668692978", 00:39:37.367 "assigned_rate_limits": { 00:39:37.367 "rw_ios_per_sec": 0, 00:39:37.367 "rw_mbytes_per_sec": 0, 00:39:37.367 "r_mbytes_per_sec": 0, 00:39:37.367 "w_mbytes_per_sec": 0 00:39:37.367 }, 00:39:37.367 "claimed": false, 00:39:37.367 "zoned": false, 00:39:37.367 "supported_io_types": { 00:39:37.367 "read": true, 00:39:37.367 "write": true, 00:39:37.367 "unmap": true, 00:39:37.367 "flush": false, 00:39:37.367 "reset": true, 00:39:37.367 "nvme_admin": false, 00:39:37.367 "nvme_io": false, 00:39:37.367 "nvme_io_md": false, 00:39:37.367 "write_zeroes": true, 00:39:37.367 "zcopy": false, 00:39:37.367 "get_zone_info": false, 00:39:37.367 "zone_management": false, 00:39:37.367 "zone_append": false, 00:39:37.367 "compare": false, 00:39:37.367 "compare_and_write": false, 00:39:37.367 "abort": false, 00:39:37.367 "seek_hole": true, 00:39:37.367 "seek_data": true, 00:39:37.367 "copy": false, 00:39:37.367 "nvme_iov_md": false 00:39:37.367 }, 00:39:37.367 "driver_specific": { 00:39:37.367 "lvol": { 00:39:37.367 "lvol_store_uuid": "9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c", 00:39:37.367 "base_bdev": "nvme0n1", 00:39:37.367 "thin_provision": true, 00:39:37.367 "num_allocated_clusters": 0, 00:39:37.367 "snapshot": false, 00:39:37.367 "clone": false, 00:39:37.367 "esnap_clone": false 00:39:37.367 } 00:39:37.367 } 00:39:37.367 } 00:39:37.367 ]' 00:39:37.367 02:08:46 
ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:39:37.626 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:39:37.884 [2024-10-15 02:08:46.697983] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=f05b6321-1ac0-406e-be33-4f7668692978 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:39:37.884 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f05b6321-1ac0-406e-be33-4f7668692978 00:39:38.144 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:39:38.144 { 00:39:38.144 "name": "f05b6321-1ac0-406e-be33-4f7668692978", 00:39:38.144 "aliases": [ 00:39:38.144 "lvs/nvme0n1p0" 00:39:38.144 ], 00:39:38.144 "product_name": "Logical Volume", 00:39:38.144 "block_size": 4096, 00:39:38.144 "num_blocks": 26476544, 00:39:38.144 "uuid": "f05b6321-1ac0-406e-be33-4f7668692978", 00:39:38.144 "assigned_rate_limits": { 00:39:38.144 "rw_ios_per_sec": 0, 00:39:38.144 "rw_mbytes_per_sec": 0, 00:39:38.144 "r_mbytes_per_sec": 0, 00:39:38.144 "w_mbytes_per_sec": 0 00:39:38.144 }, 00:39:38.144 "claimed": false, 00:39:38.144 "zoned": false, 00:39:38.144 "supported_io_types": { 00:39:38.144 "read": true, 00:39:38.144 "write": true, 00:39:38.144 "unmap": true, 00:39:38.144 "flush": false, 00:39:38.144 "reset": true, 00:39:38.144 "nvme_admin": false, 00:39:38.144 "nvme_io": false, 00:39:38.144 "nvme_io_md": false, 00:39:38.144 "write_zeroes": true, 00:39:38.144 "zcopy": false, 00:39:38.144 "get_zone_info": false, 00:39:38.144 "zone_management": false, 00:39:38.144 "zone_append": false, 00:39:38.144 "compare": false, 00:39:38.144 "compare_and_write": false, 00:39:38.144 "abort": false, 00:39:38.144 "seek_hole": true, 00:39:38.144 "seek_data": true, 00:39:38.144 "copy": false, 00:39:38.144 "nvme_iov_md": false 00:39:38.144 }, 00:39:38.144 "driver_specific": { 00:39:38.144 "lvol": { 00:39:38.144 "lvol_store_uuid": "9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c", 00:39:38.144 "base_bdev": "nvme0n1", 00:39:38.144 "thin_provision": true, 00:39:38.144 "num_allocated_clusters": 0, 00:39:38.144 "snapshot": false, 
00:39:38.144 "clone": false, 00:39:38.144 "esnap_clone": false 00:39:38.144 } 00:39:38.144 } 00:39:38.144 } 00:39:38.144 ]' 00:39:38.144 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:39:38.144 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:39:38.144 02:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:39:38.144 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:39:38.144 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:39:38.144 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:39:38.144 02:08:47 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:39:38.144 02:08:47 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:39:38.403 [2024-10-15 02:08:47.272812] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size f05b6321-1ac0-406e-be33-4f7668692978 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=f05b6321-1ac0-406e-be33-4f7668692978 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:39:38.403 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f05b6321-1ac0-406e-be33-4f7668692978 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:39:38.662 { 00:39:38.662 "name": "f05b6321-1ac0-406e-be33-4f7668692978", 00:39:38.662 "aliases": [ 00:39:38.662 "lvs/nvme0n1p0" 00:39:38.662 ], 00:39:38.662 "product_name": "Logical Volume", 00:39:38.662 "block_size": 4096, 00:39:38.662 "num_blocks": 26476544, 00:39:38.662 "uuid": "f05b6321-1ac0-406e-be33-4f7668692978", 00:39:38.662 "assigned_rate_limits": { 00:39:38.662 "rw_ios_per_sec": 0, 00:39:38.662 "rw_mbytes_per_sec": 0, 00:39:38.662 "r_mbytes_per_sec": 0, 00:39:38.662 "w_mbytes_per_sec": 0 00:39:38.662 }, 00:39:38.662 "claimed": false, 00:39:38.662 "zoned": false, 00:39:38.662 "supported_io_types": { 00:39:38.662 "read": true, 00:39:38.662 "write": true, 00:39:38.662 "unmap": true, 00:39:38.662 "flush": false, 00:39:38.662 "reset": true, 00:39:38.662 "nvme_admin": false, 00:39:38.662 "nvme_io": false, 00:39:38.662 "nvme_io_md": false, 00:39:38.662 "write_zeroes": true, 00:39:38.662 "zcopy": false, 00:39:38.662 "get_zone_info": false, 00:39:38.662 "zone_management": false, 00:39:38.662 "zone_append": false, 00:39:38.662 "compare": false, 00:39:38.662 "compare_and_write": false, 00:39:38.662 "abort": false, 00:39:38.662 "seek_hole": true, 00:39:38.662 "seek_data": true, 00:39:38.662 "copy": false, 00:39:38.662 "nvme_iov_md": false 00:39:38.662 }, 00:39:38.662 "driver_specific": { 00:39:38.662 "lvol": { 00:39:38.662 "lvol_store_uuid": "9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c", 00:39:38.662 "base_bdev": "nvme0n1", 00:39:38.662 "thin_provision": true, 00:39:38.662 "num_allocated_clusters": 0, 00:39:38.662 "snapshot": false, 00:39:38.662 "clone": false, 00:39:38.662 
"esnap_clone": false 00:39:38.662 } 00:39:38.662 } 00:39:38.662 } 00:39:38.662 ]' 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:39:38.662 02:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f05b6321-1ac0-406e-be33-4f7668692978 -c nvc0n1p0 --l2p_dram_limit 20 00:39:38.922 [2024-10-15 02:08:47.754994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.755035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:38.922 [2024-10-15 02:08:47.755058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:38.922 [2024-10-15 02:08:47.755072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.755134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.755150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:38.922 [2024-10-15 02:08:47.755166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:39:38.922 [2024-10-15 02:08:47.755177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.755204] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:38.922 [2024-10-15 02:08:47.755946] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:38.922 [2024-10-15 02:08:47.755972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.755983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:38.922 [2024-10-15 02:08:47.755997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:39:38.922 [2024-10-15 02:08:47.756007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.756087] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 871bb205-c60b-4de5-8da3-ac3e1be13c93 00:39:38.922 [2024-10-15 02:08:47.758377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.758424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:39:38.922 [2024-10-15 02:08:47.758440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:39:38.922 [2024-10-15 02:08:47.758456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.771452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.771501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:38.922 [2024-10-15 02:08:47.771517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
12.943 ms 00:39:38.922 [2024-10-15 02:08:47.771534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.771678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.771701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:38.922 [2024-10-15 02:08:47.771713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:39:38.922 [2024-10-15 02:08:47.771726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.771804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.771830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:38.922 [2024-10-15 02:08:47.771841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:38.922 [2024-10-15 02:08:47.771854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.771884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:38.922 [2024-10-15 02:08:47.776948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.776998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:38.922 [2024-10-15 02:08:47.777018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.069 ms 00:39:38.922 [2024-10-15 02:08:47.777030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.777071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.777084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:38.922 [2024-10-15 02:08:47.777101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:38.922 [2024-10-15 02:08:47.777112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.777155] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:39:38.922 [2024-10-15 02:08:47.777298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:38.922 [2024-10-15 02:08:47.777318] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:38.922 [2024-10-15 02:08:47.777333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:38.922 [2024-10-15 02:08:47.777349] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:38.922 [2024-10-15 02:08:47.777362] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:38.922 [2024-10-15 02:08:47.777379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:38.922 [2024-10-15 02:08:47.777391] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:38.922 [2024-10-15 02:08:47.777421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:38.922 [2024-10-15 02:08:47.777436] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:38.922 [2024-10-15 02:08:47.777451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 
02:08:47.777461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:38.922 [2024-10-15 02:08:47.777475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:39:38.922 [2024-10-15 02:08:47.777485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.777568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.922 [2024-10-15 02:08:47.777582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:38.922 [2024-10-15 02:08:47.777595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:38.922 [2024-10-15 02:08:47.777608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.922 [2024-10-15 02:08:47.777697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:38.922 [2024-10-15 02:08:47.777712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:38.922 [2024-10-15 02:08:47.777726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:38.922 [2024-10-15 02:08:47.777737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:38.922 [2024-10-15 02:08:47.777750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:38.922 [2024-10-15 02:08:47.777760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:38.922 [2024-10-15 02:08:47.777772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:38.922 [2024-10-15 02:08:47.777781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:38.922 [2024-10-15 02:08:47.777818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:38.922 [2024-10-15 02:08:47.777829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:38.922 [2024-10-15 02:08:47.777843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:38.922 [2024-10-15 02:08:47.777855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:38.922 [2024-10-15 02:08:47.777872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:38.922 [2024-10-15 02:08:47.777883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:38.922 [2024-10-15 02:08:47.777895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:38.922 [2024-10-15 02:08:47.777904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:38.922 [2024-10-15 02:08:47.777916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:38.922 [2024-10-15 02:08:47.777926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:38.922 [2024-10-15 02:08:47.777939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:38.923 [2024-10-15 02:08:47.777950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:38.923 [2024-10-15 02:08:47.777961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:38.923 [2024-10-15 02:08:47.777970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:38.923 [2024-10-15 02:08:47.777982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:38.923 [2024-10-15 02:08:47.777992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 
MiB 00:39:38.923 [2024-10-15 02:08:47.778012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:38.923 [2024-10-15 02:08:47.778024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:38.923 [2024-10-15 02:08:47.778047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:38.923 [2024-10-15 02:08:47.778061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:38.923 [2024-10-15 02:08:47.778082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:38.923 [2024-10-15 02:08:47.778093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:38.923 [2024-10-15 02:08:47.778114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:38.923 [2024-10-15 02:08:47.778124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:38.923 [2024-10-15 02:08:47.778136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:38.923 [2024-10-15 02:08:47.778146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:38.923 [2024-10-15 02:08:47.778157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:38.923 [2024-10-15 02:08:47.778166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:38.923 [2024-10-15 02:08:47.778187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:38.923 [2024-10-15 02:08:47.778199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778208] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:38.923 [2024-10-15 02:08:47.778225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:38.923 [2024-10-15 02:08:47.778236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:38.923 [2024-10-15 02:08:47.778248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:38.923 [2024-10-15 02:08:47.778263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:38.923 [2024-10-15 02:08:47.778274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:38.923 [2024-10-15 02:08:47.778284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:38.923 [2024-10-15 02:08:47.778296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:38.923 [2024-10-15 02:08:47.778306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:38.923 [2024-10-15 02:08:47.778318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:38.923 [2024-10-15 02:08:47.778332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:38.923 [2024-10-15 02:08:47.778348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778360] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:38.923 [2024-10-15 02:08:47.778372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:38.923 [2024-10-15 02:08:47.778382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:38.923 [2024-10-15 02:08:47.778394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:38.923 [2024-10-15 02:08:47.778418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:38.923 [2024-10-15 02:08:47.778436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:38.923 [2024-10-15 02:08:47.778449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:38.923 [2024-10-15 02:08:47.778462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:38.923 [2024-10-15 02:08:47.778472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:38.923 [2024-10-15 02:08:47.778485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:38.923 [2024-10-15 02:08:47.778558] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:38.923 [2024-10-15 02:08:47.778572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:38.923 [2024-10-15 02:08:47.778596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:38.923 [2024-10-15 02:08:47.778606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:38.923 [2024-10-15 02:08:47.778619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:38.923 [2024-10-15 02:08:47.778630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.923 [2024-10-15 
02:08:47.778646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:38.923 [2024-10-15 02:08:47.778658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:39:38.923 [2024-10-15 02:08:47.778671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.923 [2024-10-15 02:08:47.778717] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:39:38.923 [2024-10-15 02:08:47.778736] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:39:42.210 [2024-10-15 02:08:50.707154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.210 [2024-10-15 02:08:50.707226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:39:42.210 [2024-10-15 02:08:50.707247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2928.452 ms 00:39:42.210 [2024-10-15 02:08:50.707261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.210 [2024-10-15 02:08:50.752468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.210 [2024-10-15 02:08:50.752541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:42.210 [2024-10-15 02:08:50.752563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.932 ms 00:39:42.210 [2024-10-15 02:08:50.752581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.210 [2024-10-15 02:08:50.752778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.210 [2024-10-15 02:08:50.752801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:42.210 [2024-10-15 02:08:50.752818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:39:42.210 [2024-10-15 02:08:50.752831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.210 [2024-10-15 02:08:50.794549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.210 [2024-10-15 02:08:50.794605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:42.210 [2024-10-15 02:08:50.794627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.667 ms 00:39:42.210 [2024-10-15 02:08:50.794642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.210 [2024-10-15 02:08:50.794688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.210 [2024-10-15 02:08:50.794705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:42.210 [2024-10-15 02:08:50.794718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:42.210 [2024-10-15 02:08:50.794730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.210 [2024-10-15 02:08:50.795527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.210 [2024-10-15 02:08:50.795555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:42.210 [2024-10-15 02:08:50.795569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:39:42.211 [2024-10-15 02:08:50.795585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.795747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.795766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
bands metadata 00:39:42.211 [2024-10-15 02:08:50.795778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:39:42.211 [2024-10-15 02:08:50.795790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.813026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.813064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:42.211 [2024-10-15 02:08:50.813079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.215 ms 00:39:42.211 [2024-10-15 02:08:50.813093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.825884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:39:42.211 [2024-10-15 02:08:50.834846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.834876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:42.211 [2024-10-15 02:08:50.834894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.635 ms 00:39:42.211 [2024-10-15 02:08:50.834905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.907277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.907314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:39:42.211 [2024-10-15 02:08:50.907337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.335 ms 00:39:42.211 [2024-10-15 02:08:50.907348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.907571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.907591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:42.211 [2024-10-15 02:08:50.907606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:39:42.211 [2024-10-15 02:08:50.907617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.932081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.932115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:39:42.211 [2024-10-15 02:08:50.932133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.407 ms 00:39:42.211 [2024-10-15 02:08:50.932148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.955857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.955889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:39:42.211 [2024-10-15 02:08:50.955908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.664 ms 00:39:42.211 [2024-10-15 02:08:50.955918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:50.956621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:50.956646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:42.211 [2024-10-15 02:08:50.956669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:39:42.211 [2024-10-15 02:08:50.956680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:39:42.211 [2024-10-15 02:08:51.032518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:51.032552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:39:42.211 [2024-10-15 02:08:51.032571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.795 ms 00:39:42.211 [2024-10-15 02:08:51.032582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:51.059139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:51.059173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:39:42.211 [2024-10-15 02:08:51.059192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.485 ms 00:39:42.211 [2024-10-15 02:08:51.059203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:51.083382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:51.083422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:39:42.211 [2024-10-15 02:08:51.083441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.149 ms 00:39:42.211 [2024-10-15 02:08:51.083451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:51.107843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:51.107877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:42.211 [2024-10-15 02:08:51.107898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.362 ms 00:39:42.211 [2024-10-15 02:08:51.107908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:51.107942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:51.107955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:42.211 [2024-10-15 02:08:51.107970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:42.211 [2024-10-15 02:08:51.107980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:51.108081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.211 [2024-10-15 02:08:51.108097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:42.211 [2024-10-15 02:08:51.108111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:39:42.211 [2024-10-15 02:08:51.108121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.211 [2024-10-15 02:08:51.109529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3354.021 ms, result 0 00:39:42.211 { 00:39:42.211 "name": "ftl0", 00:39:42.211 "uuid": "871bb205-c60b-4de5-8da3-ac3e1be13c93" 00:39:42.211 } 00:39:42.211 02:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:39:42.211 02:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:39:42.211 02:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:39:42.470 02:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:39:42.729 [2024-10-15 02:08:51.521530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: 
*NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:39:42.729 I/O size of 69632 is greater than zero copy threshold (65536). 00:39:42.729 Zero copy mechanism will not be used. 00:39:42.729 Running I/O for 4 seconds... 00:39:44.601 1818.00 IOPS, 120.73 MiB/s [2024-10-15T02:08:54.549Z] 1829.00 IOPS, 121.46 MiB/s [2024-10-15T02:08:55.941Z] 1830.00 IOPS, 121.52 MiB/s [2024-10-15T02:08:55.941Z] 1832.00 IOPS, 121.66 MiB/s 00:39:46.929 Latency(us) 00:39:46.929 [2024-10-15T02:08:55.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.929 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:39:46.929 ftl0 : 4.00 1831.58 121.63 0.00 0.00 573.92 222.49 1906.50 00:39:46.929 [2024-10-15T02:08:55.941Z] =================================================================================================================== 00:39:46.929 [2024-10-15T02:08:55.941Z] Total : 1831.58 121.63 0.00 0.00 573.92 222.49 1906.50 00:39:46.929 [2024-10-15 02:08:55.530958] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:39:46.929 { 00:39:46.929 "results": [ 00:39:46.929 { 00:39:46.929 "job": "ftl0", 00:39:46.929 "core_mask": "0x1", 00:39:46.929 "workload": "randwrite", 00:39:46.929 "status": "finished", 00:39:46.929 "queue_depth": 1, 00:39:46.929 "io_size": 69632, 00:39:46.929 "runtime": 4.001468, 00:39:46.929 "iops": 1831.5778109433838, 00:39:46.929 "mibps": 121.62821400795909, 00:39:46.929 "io_failed": 0, 00:39:46.929 "io_timeout": 0, 00:39:46.929 "avg_latency_us": 573.9191724035277, 00:39:46.929 "min_latency_us": 222.48727272727274, 00:39:46.929 "max_latency_us": 1906.5018181818182 00:39:46.929 } 00:39:46.929 ], 00:39:46.929 "core_count": 1 00:39:46.929 } 00:39:46.929 02:08:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:39:46.929 [2024-10-15 02:08:55.674964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:39:46.929 Running I/O for 4 seconds... 
00:39:48.801 9407.00 IOPS, 36.75 MiB/s [2024-10-15T02:08:58.746Z] 9354.50 IOPS, 36.54 MiB/s [2024-10-15T02:09:00.122Z] 9325.00 IOPS, 36.43 MiB/s [2024-10-15T02:09:00.122Z] 9344.75 IOPS, 36.50 MiB/s 00:39:51.110 Latency(us) 00:39:51.110 [2024-10-15T02:09:00.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.110 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.110 ftl0 : 4.02 9335.24 36.47 0.00 0.00 13680.50 256.93 24188.74 00:39:51.110 [2024-10-15T02:09:00.122Z] =================================================================================================================== 00:39:51.110 [2024-10-15T02:09:00.122Z] Total : 9335.24 36.47 0.00 0.00 13680.50 0.00 24188.74 00:39:51.110 { 00:39:51.110 "results": [ 00:39:51.110 { 00:39:51.110 "job": "ftl0", 00:39:51.110 "core_mask": "0x1", 00:39:51.111 "workload": "randwrite", 00:39:51.111 "status": "finished", 00:39:51.111 "queue_depth": 128, 00:39:51.111 "io_size": 4096, 00:39:51.111 "runtime": 4.017145, 00:39:51.111 "iops": 9335.236841089878, 00:39:51.111 "mibps": 36.465768910507336, 00:39:51.111 "io_failed": 0, 00:39:51.111 "io_timeout": 0, 00:39:51.111 "avg_latency_us": 13680.500789360769, 00:39:51.111 "min_latency_us": 256.9309090909091, 00:39:51.111 "max_latency_us": 24188.741818181818 00:39:51.111 } 00:39:51.111 ], 00:39:51.111 "core_count": 1 00:39:51.111 } 00:39:51.111 [2024-10-15 02:08:59.700658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:39:51.111 02:08:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:39:51.111 [2024-10-15 02:08:59.846852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:39:51.111 Running I/O for 4 seconds... 
00:39:52.982 6631.00 IOPS, 25.90 MiB/s [2024-10-15T02:09:02.930Z] 6809.50 IOPS, 26.60 MiB/s [2024-10-15T02:09:03.866Z] 6678.67 IOPS, 26.09 MiB/s [2024-10-15T02:09:04.125Z] 6627.75 IOPS, 25.89 MiB/s 00:39:55.113 Latency(us) 00:39:55.113 [2024-10-15T02:09:04.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.113 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:55.113 Verification LBA range: start 0x0 length 0x1400000 00:39:55.113 ftl0 : 4.01 6640.44 25.94 0.00 0.00 19216.25 297.89 21328.99 00:39:55.113 [2024-10-15T02:09:04.125Z] =================================================================================================================== 00:39:55.113 [2024-10-15T02:09:04.125Z] Total : 6640.44 25.94 0.00 0.00 19216.25 0.00 21328.99 00:39:55.113 [2024-10-15 02:09:03.874272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:39:55.113 { 00:39:55.113 "results": [ 00:39:55.113 { 00:39:55.113 "job": "ftl0", 00:39:55.113 "core_mask": "0x1", 00:39:55.113 "workload": "verify", 00:39:55.113 "status": "finished", 00:39:55.113 "verify_range": { 00:39:55.113 "start": 0, 00:39:55.113 "length": 20971520 00:39:55.113 }, 00:39:55.113 "queue_depth": 128, 00:39:55.113 "io_size": 4096, 00:39:55.113 "runtime": 4.011328, 00:39:55.113 "iops": 6640.44426185044, 00:39:55.113 "mibps": 25.93923539785328, 00:39:55.113 "io_failed": 0, 00:39:55.113 "io_timeout": 0, 00:39:55.113 "avg_latency_us": 19216.24559904712, 00:39:55.113 "min_latency_us": 297.8909090909091, 00:39:55.113 "max_latency_us": 21328.98909090909 00:39:55.113 } 00:39:55.113 ], 00:39:55.113 "core_count": 1 00:39:55.113 } 00:39:55.113 02:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:39:55.372 [2024-10-15 02:09:04.141801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.141854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:55.372 [2024-10-15 02:09:04.141877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:55.372 [2024-10-15 02:09:04.141889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.141922] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:55.372 [2024-10-15 02:09:04.145320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.145354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:55.372 [2024-10-15 02:09:04.145369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:39:55.372 [2024-10-15 02:09:04.145386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.147404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.147490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:55.372 [2024-10-15 02:09:04.147506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.984 ms 00:39:55.372 [2024-10-15 02:09:04.147526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.305438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.305510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:39:55.372 [2024-10-15 02:09:04.305528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 157.892 ms 00:39:55.372 [2024-10-15 02:09:04.305542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.310639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.310688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:55.372 [2024-10-15 02:09:04.310703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.058 ms 00:39:55.372 [2024-10-15 02:09:04.310716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.335639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.335684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:55.372 [2024-10-15 02:09:04.335699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.851 ms 00:39:55.372 [2024-10-15 02:09:04.335713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.352051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.352097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:55.372 [2024-10-15 02:09:04.352113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.300 ms 00:39:55.372 [2024-10-15 02:09:04.352129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.352262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.352287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:55.372 [2024-10-15 02:09:04.352299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:39:55.372 [2024-10-15 02:09:04.352331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.372 [2024-10-15 02:09:04.376895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.372 [2024-10-15 02:09:04.376964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:55.372 [2024-10-15 02:09:04.376980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.528 ms 00:39:55.373 [2024-10-15 02:09:04.376992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.633 [2024-10-15 02:09:04.401081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.633 [2024-10-15 02:09:04.401123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:55.633 [2024-10-15 02:09:04.401138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.050 ms 00:39:55.633 [2024-10-15 02:09:04.401150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.633 [2024-10-15 02:09:04.424806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.633 [2024-10-15 02:09:04.424851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:55.633 [2024-10-15 02:09:04.424866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.619 ms 00:39:55.633 [2024-10-15 02:09:04.424881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.633 [2024-10-15 02:09:04.448379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.633 [2024-10-15 02:09:04.448430] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:55.633 [2024-10-15 02:09:04.448446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.420 ms 00:39:55.633 [2024-10-15 02:09:04.448458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.633 [2024-10-15 02:09:04.448495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:55.633 [2024-10-15 02:09:04.448524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:39:55.633 [2024-10-15 02:09:04.448826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.448993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:55.633 [2024-10-15 02:09:04.449253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449798] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:55.634 [2024-10-15 02:09:04.449856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:55.634 [2024-10-15 02:09:04.449867] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 871bb205-c60b-4de5-8da3-ac3e1be13c93 00:39:55.634 [2024-10-15 02:09:04.449881] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:55.634 [2024-10-15 02:09:04.449891] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:55.634 [2024-10-15 02:09:04.449904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:55.634 [2024-10-15 02:09:04.449916] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:55.634 [2024-10-15 02:09:04.449931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:55.634 [2024-10-15 02:09:04.449942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:55.634 [2024-10-15 02:09:04.449955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:55.634 [2024-10-15 02:09:04.449964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:55.634 [2024-10-15 02:09:04.449975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:55.634 [2024-10-15 02:09:04.449986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.634 [2024-10-15 02:09:04.450000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:55.634 [2024-10-15 02:09:04.450014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:39:55.634 [2024-10-15 02:09:04.450026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.464149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.634 [2024-10-15 02:09:04.464190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:55.634 [2024-10-15 02:09:04.464204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.083 ms 00:39:55.634 [2024-10-15 02:09:04.464217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.464740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.634 [2024-10-15 02:09:04.464778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:55.634 [2024-10-15 02:09:04.464792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:39:55.634 [2024-10-15 02:09:04.464835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.500204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.634 [2024-10-15 02:09:04.500251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:55.634 [2024-10-15 02:09:04.500266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.634 [2024-10-15 02:09:04.500279] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.500344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.634 [2024-10-15 02:09:04.500365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:55.634 [2024-10-15 02:09:04.500377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.634 [2024-10-15 02:09:04.500389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.500498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.634 [2024-10-15 02:09:04.500533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:55.634 [2024-10-15 02:09:04.500546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.634 [2024-10-15 02:09:04.500559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.500581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.634 [2024-10-15 02:09:04.500597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:55.634 [2024-10-15 02:09:04.500612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.634 [2024-10-15 02:09:04.500628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.634 [2024-10-15 02:09:04.589115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.634 [2024-10-15 02:09:04.589188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:55.634 [2024-10-15 02:09:04.589205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.634 [2024-10-15 02:09:04.589219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.661200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.661268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:55.894 [2024-10-15 02:09:04.661286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.894 [2024-10-15 02:09:04.661299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.661470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.661494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:55.894 [2024-10-15 02:09:04.661506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.894 [2024-10-15 02:09:04.661520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.661598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.661620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:55.894 [2024-10-15 02:09:04.661632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.894 [2024-10-15 02:09:04.661652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.661780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.661814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:55.894 [2024-10-15 02:09:04.661828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:39:55.894 [2024-10-15 02:09:04.661843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.661892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.661913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:55.894 [2024-10-15 02:09:04.661925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.894 [2024-10-15 02:09:04.661956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.662012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.662030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:55.894 [2024-10-15 02:09:04.662041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.894 [2024-10-15 02:09:04.662054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.662113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.894 [2024-10-15 02:09:04.662133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:55.894 [2024-10-15 02:09:04.662157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.894 [2024-10-15 02:09:04.662178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.894 [2024-10-15 02:09:04.662338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.492 ms, result 0 00:39:55.894 [2024-10-15 02:09:04.663487] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 
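For reference, the ftl_bdevperf run traced above boils down to a short RPC sequence. The sketch below is a hand-written reconstruction, not part of the captured log: it assumes the repo path /home/vagrant/spdk_repo/spdk seen in the trace, a bdevperf process already serving its RPC socket, and an FTL bdev registered as ftl0.

# Reconstruction of the bdevperf.sh steps (lines 28-34) as they appear in this log.
SPDK=/home/vagrant/spdk_repo/spdk
PERF="$SPDK/examples/bdev/bdevperf/bdevperf.py"

# Step 28: confirm the FTL bdev exists before driving I/O at it.
"$SPDK/scripts/rpc.py" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

# Steps 30-32: the three 4-second workloads whose result tables appear above.
"$PERF" perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB writes at queue depth 1
"$PERF" perform_tests -q 128 -w randwrite -t 4 -o 4096    # 4 KiB writes at queue depth 128
"$PERF" perform_tests -q 128 -w verify    -t 4 -o 4096    # verify pass over the 20 MiB LBA range
# Step 34: deleting the bdev is what triggers the 'FTL shutdown' management trace above.
"$SPDK/scripts/rpc.py" bdev_ftl_delete -b ftl0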
00:39:55.894 true 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75639 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75639 ']' 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75639 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75639 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:55.894 killing process with pid 75639 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75639' 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75639 00:39:55.894 Received shutdown signal, test time was about 4.000000 seconds 00:39:55.894 00:39:55.894 Latency(us) 00:39:55.894 [2024-10-15T02:09:04.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.894 [2024-10-15T02:09:04.906Z] =================================================================================================================== 00:39:55.894 [2024-10-15T02:09:04.906Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:55.894 02:09:04 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75639 00:39:55.894 [2024-10-15 02:09:04.731975] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106920 was disconnected and freed. delete nvme_qpair. 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:40:00.084 Remove shared memory files 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:40:00.084 00:40:00.084 real 0m25.148s 00:40:00.084 user 0m28.213s 00:40:00.084 sys 0m1.258s 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:00.084 ************************************ 00:40:00.084 02:09:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:00.084 END TEST ftl_bdevperf 00:40:00.084 ************************************ 00:40:00.084 02:09:08 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:40:00.084 02:09:08 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:00.084 02:09:08 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:00.084 02:09:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:40:00.084 ************************************ 00:40:00.084 START TEST ftl_trim 00:40:00.084 ************************************ 00:40:00.084 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:40:00.084 * Looking for 
test storage... 00:40:00.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:40:00.084 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:00.084 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:40:00.084 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:00.084 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:40:00.084 02:09:08 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.085 02:09:08 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:00.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.085 --rc genhtml_branch_coverage=1 00:40:00.085 --rc genhtml_function_coverage=1 00:40:00.085 --rc genhtml_legend=1 00:40:00.085 --rc geninfo_all_blocks=1 00:40:00.085 --rc geninfo_unexecuted_blocks=1 00:40:00.085 00:40:00.085 ' 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:00.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.085 --rc genhtml_branch_coverage=1 00:40:00.085 --rc genhtml_function_coverage=1 00:40:00.085 --rc genhtml_legend=1 00:40:00.085 --rc geninfo_all_blocks=1 00:40:00.085 --rc 
geninfo_unexecuted_blocks=1 00:40:00.085 00:40:00.085 ' 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:00.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.085 --rc genhtml_branch_coverage=1 00:40:00.085 --rc genhtml_function_coverage=1 00:40:00.085 --rc genhtml_legend=1 00:40:00.085 --rc geninfo_all_blocks=1 00:40:00.085 --rc geninfo_unexecuted_blocks=1 00:40:00.085 00:40:00.085 ' 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:00.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.085 --rc genhtml_branch_coverage=1 00:40:00.085 --rc genhtml_function_coverage=1 00:40:00.085 --rc genhtml_legend=1 00:40:00.085 --rc geninfo_all_blocks=1 00:40:00.085 --rc geninfo_unexecuted_blocks=1 00:40:00.085 00:40:00.085 ' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:00.085 02:09:08 ftl.ftl_trim -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75985 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75985 00:40:00.085 02:09:08 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75985 ']' 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:00.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:00.085 02:09:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:40:00.085 [2024-10-15 02:09:08.840207] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:40:00.085 [2024-10-15 02:09:08.840385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75985 ] 00:40:00.085 [2024-10-15 02:09:09.014068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:00.344 [2024-10-15 02:09:09.223222] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:00.344 [2024-10-15 02:09:09.223355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.344 [2024-10-15 02:09:09.223364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:01.280 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:01.280 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:40:01.280 02:09:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:40:01.280 02:09:10 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:40:01.280 02:09:10 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:40:01.280 02:09:10 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:40:01.280 02:09:10 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:40:01.280 02:09:10 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:40:01.539 02:09:10 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:40:01.539 02:09:10 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:40:01.539 02:09:10 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:40:01.539 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:40:01.539 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:40:01.539 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:40:01.539 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:40:01.539 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:40:01.798 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:40:01.798 { 00:40:01.798 "name": "nvme0n1", 00:40:01.798 "aliases": [ 00:40:01.798 "6306d167-f091-41eb-b681-8735635af53d" 00:40:01.798 ], 00:40:01.798 "product_name": "NVMe disk", 00:40:01.798 "block_size": 4096, 00:40:01.798 "num_blocks": 1310720, 00:40:01.798 "uuid": "6306d167-f091-41eb-b681-8735635af53d", 00:40:01.798 "numa_id": -1, 00:40:01.798 "assigned_rate_limits": { 00:40:01.798 "rw_ios_per_sec": 0, 00:40:01.798 "rw_mbytes_per_sec": 0, 00:40:01.798 "r_mbytes_per_sec": 0, 00:40:01.798 "w_mbytes_per_sec": 0 00:40:01.798 }, 00:40:01.798 "claimed": true, 00:40:01.798 "claim_type": "read_many_write_one", 00:40:01.798 "zoned": false, 00:40:01.798 "supported_io_types": { 00:40:01.798 "read": true, 00:40:01.798 "write": true, 00:40:01.798 "unmap": true, 00:40:01.798 "flush": true, 00:40:01.798 "reset": true, 00:40:01.798 "nvme_admin": true, 00:40:01.798 "nvme_io": true, 00:40:01.798 "nvme_io_md": false, 00:40:01.798 "write_zeroes": true, 00:40:01.798 "zcopy": false, 00:40:01.798 "get_zone_info": false, 00:40:01.798 "zone_management": false, 00:40:01.798 "zone_append": false, 00:40:01.798 "compare": true, 00:40:01.798 "compare_and_write": false, 00:40:01.798 "abort": true, 00:40:01.798 "seek_hole": false, 00:40:01.798 
"seek_data": false, 00:40:01.798 "copy": true, 00:40:01.798 "nvme_iov_md": false 00:40:01.798 }, 00:40:01.798 "driver_specific": { 00:40:01.798 "nvme": [ 00:40:01.798 { 00:40:01.798 "pci_address": "0000:00:11.0", 00:40:01.798 "trid": { 00:40:01.798 "trtype": "PCIe", 00:40:01.798 "traddr": "0000:00:11.0" 00:40:01.798 }, 00:40:01.798 "ctrlr_data": { 00:40:01.798 "cntlid": 0, 00:40:01.798 "vendor_id": "0x1b36", 00:40:01.798 "model_number": "QEMU NVMe Ctrl", 00:40:01.798 "serial_number": "12341", 00:40:01.798 "firmware_revision": "8.0.0", 00:40:01.798 "subnqn": "nqn.2019-08.org.qemu:12341", 00:40:01.798 "oacs": { 00:40:01.798 "security": 0, 00:40:01.799 "format": 1, 00:40:01.799 "firmware": 0, 00:40:01.799 "ns_manage": 1 00:40:01.799 }, 00:40:01.799 "multi_ctrlr": false, 00:40:01.799 "ana_reporting": false 00:40:01.799 }, 00:40:01.799 "vs": { 00:40:01.799 "nvme_version": "1.4" 00:40:01.799 }, 00:40:01.799 "ns_data": { 00:40:01.799 "id": 1, 00:40:01.799 "can_share": false 00:40:01.799 } 00:40:01.799 } 00:40:01.799 ], 00:40:01.799 "mp_policy": "active_passive" 00:40:01.799 } 00:40:01.799 } 00:40:01.799 ]' 00:40:01.799 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:40:01.799 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:40:01.799 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:40:01.799 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:40:01.799 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:40:01.799 02:09:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:40:01.799 02:09:10 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:40:01.799 02:09:10 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:40:01.799 02:09:10 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:40:01.799 02:09:10 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:01.799 02:09:10 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:40:02.057 02:09:10 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c 00:40:02.058 02:09:10 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:40:02.058 02:09:10 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b0b13bc-e9f2-47c3-aa2f-09c9e4cb862c 00:40:02.316 [2024-10-15 02:09:11.166565] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200036416720 was disconnected and freed. delete nvme_qpair. 
00:40:02.316 02:09:11 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:40:02.575 02:09:11 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7af46add-c4f2-484c-bda1-34b6ab418b10 00:40:02.575 02:09:11 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7af46add-c4f2-484c-bda1-34b6ab418b10 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a939a074-1d10-4055-bfc4-7babbb677858 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a939a074-1d10-4055-bfc4-7babbb677858 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a939a074-1d10-4055-bfc4-7babbb677858 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:40:02.837 02:09:11 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a939a074-1d10-4055-bfc4-7babbb677858 00:40:02.837 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a939a074-1d10-4055-bfc4-7babbb677858 00:40:02.837 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:40:02.837 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:40:02.837 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:40:02.837 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a939a074-1d10-4055-bfc4-7babbb677858 00:40:03.129 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:40:03.129 { 00:40:03.129 "name": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:03.129 "aliases": [ 00:40:03.129 "lvs/nvme0n1p0" 00:40:03.129 ], 00:40:03.130 "product_name": "Logical Volume", 00:40:03.130 "block_size": 4096, 00:40:03.130 "num_blocks": 26476544, 00:40:03.130 "uuid": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:03.130 "assigned_rate_limits": { 00:40:03.130 "rw_ios_per_sec": 0, 00:40:03.130 "rw_mbytes_per_sec": 0, 00:40:03.130 "r_mbytes_per_sec": 0, 00:40:03.130 "w_mbytes_per_sec": 0 00:40:03.130 }, 00:40:03.130 "claimed": false, 00:40:03.130 "zoned": false, 00:40:03.130 "supported_io_types": { 00:40:03.130 "read": true, 00:40:03.130 "write": true, 00:40:03.130 "unmap": true, 00:40:03.130 "flush": false, 00:40:03.130 "reset": true, 00:40:03.130 "nvme_admin": false, 00:40:03.130 "nvme_io": false, 00:40:03.130 "nvme_io_md": false, 00:40:03.130 "write_zeroes": true, 00:40:03.130 "zcopy": false, 00:40:03.130 "get_zone_info": false, 00:40:03.130 "zone_management": false, 00:40:03.130 "zone_append": false, 00:40:03.130 "compare": false, 00:40:03.130 "compare_and_write": false, 00:40:03.130 "abort": false, 00:40:03.130 "seek_hole": true, 00:40:03.130 "seek_data": true, 00:40:03.130 "copy": false, 00:40:03.130 "nvme_iov_md": false 00:40:03.130 }, 00:40:03.130 "driver_specific": { 00:40:03.130 "lvol": { 00:40:03.130 "lvol_store_uuid": "7af46add-c4f2-484c-bda1-34b6ab418b10", 00:40:03.130 "base_bdev": "nvme0n1", 00:40:03.130 "thin_provision": true, 00:40:03.130 "num_allocated_clusters": 0, 00:40:03.130 "snapshot": false, 00:40:03.130 "clone": false, 00:40:03.130 "esnap_clone": false 00:40:03.130 } 00:40:03.130 } 00:40:03.130 } 00:40:03.130 ]' 00:40:03.130 02:09:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
00:40:03.130 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:40:03.130 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:40:03.130 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:40:03.130 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:40:03.130 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:40:03.130 02:09:12 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:40:03.130 02:09:12 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:40:03.130 02:09:12 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:40:03.401 [2024-10-15 02:09:12.398597] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035039da0 was disconnected and freed. delete nvme_qpair. 00:40:03.660 02:09:12 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:40:03.660 02:09:12 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:40:03.660 02:09:12 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a939a074-1d10-4055-bfc4-7babbb677858 00:40:03.660 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a939a074-1d10-4055-bfc4-7babbb677858 00:40:03.660 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:40:03.660 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:40:03.660 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:40:03.660 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a939a074-1d10-4055-bfc4-7babbb677858 00:40:03.660 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:40:03.660 { 00:40:03.660 "name": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:03.660 "aliases": [ 00:40:03.660 "lvs/nvme0n1p0" 00:40:03.660 ], 00:40:03.660 "product_name": "Logical Volume", 00:40:03.660 "block_size": 4096, 00:40:03.660 "num_blocks": 26476544, 00:40:03.660 "uuid": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:03.660 "assigned_rate_limits": { 00:40:03.660 "rw_ios_per_sec": 0, 00:40:03.660 "rw_mbytes_per_sec": 0, 00:40:03.660 "r_mbytes_per_sec": 0, 00:40:03.660 "w_mbytes_per_sec": 0 00:40:03.660 }, 00:40:03.660 "claimed": false, 00:40:03.660 "zoned": false, 00:40:03.660 "supported_io_types": { 00:40:03.660 "read": true, 00:40:03.660 "write": true, 00:40:03.660 "unmap": true, 00:40:03.660 "flush": false, 00:40:03.660 "reset": true, 00:40:03.660 "nvme_admin": false, 00:40:03.660 "nvme_io": false, 00:40:03.660 "nvme_io_md": false, 00:40:03.660 "write_zeroes": true, 00:40:03.660 "zcopy": false, 00:40:03.660 "get_zone_info": false, 00:40:03.660 "zone_management": false, 00:40:03.660 "zone_append": false, 00:40:03.660 "compare": false, 00:40:03.660 "compare_and_write": false, 00:40:03.660 "abort": false, 00:40:03.660 "seek_hole": true, 00:40:03.660 "seek_data": true, 00:40:03.660 "copy": false, 00:40:03.660 "nvme_iov_md": false 00:40:03.660 }, 00:40:03.660 "driver_specific": { 00:40:03.660 "lvol": { 00:40:03.660 "lvol_store_uuid": "7af46add-c4f2-484c-bda1-34b6ab418b10", 00:40:03.660 "base_bdev": "nvme0n1", 00:40:03.660 "thin_provision": true, 00:40:03.660 "num_allocated_clusters": 0, 00:40:03.660 "snapshot": false, 00:40:03.660 "clone": false, 00:40:03.660 "esnap_clone": false 00:40:03.660 } 00:40:03.660 } 00:40:03.660 } 00:40:03.660 ]' 00:40:03.660 02:09:12 
ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:40:03.919 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:40:03.919 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:40:03.919 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:40:03.919 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:40:03.919 02:09:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:40:03.919 02:09:12 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:40:03.919 02:09:12 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:40:04.179 [2024-10-15 02:09:13.010666] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200035039da0 was disconnected and freed. delete nvme_qpair. 00:40:04.179 02:09:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:40:04.179 02:09:13 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:40:04.179 02:09:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a939a074-1d10-4055-bfc4-7babbb677858 00:40:04.179 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a939a074-1d10-4055-bfc4-7babbb677858 00:40:04.179 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:40:04.179 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:40:04.179 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:40:04.179 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a939a074-1d10-4055-bfc4-7babbb677858 00:40:04.437 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:40:04.437 { 00:40:04.437 "name": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:04.437 "aliases": [ 00:40:04.437 "lvs/nvme0n1p0" 00:40:04.437 ], 00:40:04.437 "product_name": "Logical Volume", 00:40:04.437 "block_size": 4096, 00:40:04.438 "num_blocks": 26476544, 00:40:04.438 "uuid": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:04.438 "assigned_rate_limits": { 00:40:04.438 "rw_ios_per_sec": 0, 00:40:04.438 "rw_mbytes_per_sec": 0, 00:40:04.438 "r_mbytes_per_sec": 0, 00:40:04.438 "w_mbytes_per_sec": 0 00:40:04.438 }, 00:40:04.438 "claimed": false, 00:40:04.438 "zoned": false, 00:40:04.438 "supported_io_types": { 00:40:04.438 "read": true, 00:40:04.438 "write": true, 00:40:04.438 "unmap": true, 00:40:04.438 "flush": false, 00:40:04.438 "reset": true, 00:40:04.438 "nvme_admin": false, 00:40:04.438 "nvme_io": false, 00:40:04.438 "nvme_io_md": false, 00:40:04.438 "write_zeroes": true, 00:40:04.438 "zcopy": false, 00:40:04.438 "get_zone_info": false, 00:40:04.438 "zone_management": false, 00:40:04.438 "zone_append": false, 00:40:04.438 "compare": false, 00:40:04.438 "compare_and_write": false, 00:40:04.438 "abort": false, 00:40:04.438 "seek_hole": true, 00:40:04.438 "seek_data": true, 00:40:04.438 "copy": false, 00:40:04.438 "nvme_iov_md": false 00:40:04.438 }, 00:40:04.438 "driver_specific": { 00:40:04.438 "lvol": { 00:40:04.438 "lvol_store_uuid": "7af46add-c4f2-484c-bda1-34b6ab418b10", 00:40:04.438 "base_bdev": "nvme0n1", 00:40:04.438 "thin_provision": true, 00:40:04.438 "num_allocated_clusters": 0, 00:40:04.438 "snapshot": false, 00:40:04.438 "clone": false, 00:40:04.438 "esnap_clone": false 00:40:04.438 } 00:40:04.438 } 00:40:04.438 } 00:40:04.438 ]' 00:40:04.438 02:09:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:40:04.438 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:40:04.438 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:40:04.438 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:40:04.438 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:40:04.438 02:09:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:40:04.438 02:09:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:40:04.438 02:09:13 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a939a074-1d10-4055-bfc4-7babbb677858 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:40:04.697 [2024-10-15 02:09:13.547482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.697 [2024-10-15 02:09:13.547939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:04.698 [2024-10-15 02:09:13.547989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:04.698 [2024-10-15 02:09:13.548010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.551709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.551773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:04.698 [2024-10-15 02:09:13.551798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:40:04.698 [2024-10-15 02:09:13.551824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.552085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:04.698 [2024-10-15 02:09:13.553072] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:04.698 [2024-10-15 02:09:13.553125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.553158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:04.698 [2024-10-15 02:09:13.553173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:40:04.698 [2024-10-15 02:09:13.553186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.553500] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1339797-0eae-46ce-abba-f9aa2d840265 00:40:04.698 [2024-10-15 02:09:13.556158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.556210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:40:04.698 [2024-10-15 02:09:13.556248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:40:04.698 [2024-10-15 02:09:13.556274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.569747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.569817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:04.698 [2024-10-15 02:09:13.569856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.313 ms 00:40:04.698 [2024-10-15 02:09:13.569867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
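[Editor's sketch] With the base volume in place, ftl/common.sh attached the cache controller at 0000:00:10.0 as nvc0 (bdev_nvme_attach_controller), sized the write-buffer cache from the base bdev (5171 MiB), and split it off with bdev_split_create nvc0n1 -s 5171 1. The bdev_ftl_create call traced just above assembles everything into ftl0; the management trace that follows is the FTL startup sequence it triggers. A sketch of the create call, plus a sanity check of the L2P numbers reported in the layout dump below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # ftl/trim.sh@49: -d names the base bdev (the thin lvol), -c the NV
    # cache bdev (first split of nvc0n1). -t 240 raises the RPC client
    # timeout, since first startup scrubs the NV cache region (about
    # 3.5 s in this run).
    "$rpc" -t 240 bdev_ftl_create -b ftl0 \
        -d a939a074-1d10-4055-bfc4-7babbb677858 -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

    # L2P arithmetic from the layout dump that follows:
    #   23592960 entries * 4 B/entry = 94371840 B = 90 MiB (the l2p region),
    #   kept mostly resident under --l2p_dram_limit 60, matching the later
    #   "l2p maximum resident size is: 59 (of 60) MiB" notice.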
00:40:04.698 [2024-10-15 02:09:13.570125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.570160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:04.698 [2024-10-15 02:09:13.570180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:40:04.698 [2024-10-15 02:09:13.570196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.570255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.570270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:04.698 [2024-10-15 02:09:13.570285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:40:04.698 [2024-10-15 02:09:13.570296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.570354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:04.698 [2024-10-15 02:09:13.575758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.575835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:04.698 [2024-10-15 02:09:13.575851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.418 ms 00:40:04.698 [2024-10-15 02:09:13.575865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.575944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.575968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:04.698 [2024-10-15 02:09:13.575984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:04.698 [2024-10-15 02:09:13.576001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.576100] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:40:04.698 [2024-10-15 02:09:13.576262] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:04.698 [2024-10-15 02:09:13.576290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:04.698 [2024-10-15 02:09:13.576314] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:04.698 [2024-10-15 02:09:13.576330] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:04.698 [2024-10-15 02:09:13.576348] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:04.698 [2024-10-15 02:09:13.576361] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:04.698 [2024-10-15 02:09:13.576376] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:04.698 [2024-10-15 02:09:13.576387] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:04.698 [2024-10-15 02:09:13.576417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:04.698 [2024-10-15 02:09:13.576434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.576449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:04.698 
[2024-10-15 02:09:13.576462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:40:04.698 [2024-10-15 02:09:13.576477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.576591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.698 [2024-10-15 02:09:13.576615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:04.698 [2024-10-15 02:09:13.576628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:40:04.698 [2024-10-15 02:09:13.576642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.698 [2024-10-15 02:09:13.576787] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:04.698 [2024-10-15 02:09:13.576807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:04.698 [2024-10-15 02:09:13.576821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:04.698 [2024-10-15 02:09:13.576836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.576847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:04.698 [2024-10-15 02:09:13.576860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.576872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:04.698 [2024-10-15 02:09:13.576885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:04.698 [2024-10-15 02:09:13.576896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:04.698 [2024-10-15 02:09:13.576910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:04.698 [2024-10-15 02:09:13.576920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:04.698 [2024-10-15 02:09:13.576934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:04.698 [2024-10-15 02:09:13.576944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:04.698 [2024-10-15 02:09:13.576961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:04.698 [2024-10-15 02:09:13.576972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:04.698 [2024-10-15 02:09:13.576986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.576997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:04.698 [2024-10-15 02:09:13.577010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:04.698 [2024-10-15 02:09:13.577021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:04.698 [2024-10-15 02:09:13.577046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:04.698 [2024-10-15 02:09:13.577070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:04.698 [2024-10-15 02:09:13.577085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:04.698 [2024-10-15 02:09:13.577109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
00:40:04.698 [2024-10-15 02:09:13.577120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:04.698 [2024-10-15 02:09:13.577143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:04.698 [2024-10-15 02:09:13.577160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:04.698 [2024-10-15 02:09:13.577184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:04.698 [2024-10-15 02:09:13.577195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:04.698 [2024-10-15 02:09:13.577221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:04.698 [2024-10-15 02:09:13.577235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:04.698 [2024-10-15 02:09:13.577247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:04.698 [2024-10-15 02:09:13.577266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:04.698 [2024-10-15 02:09:13.577277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:04.698 [2024-10-15 02:09:13.577291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:04.698 [2024-10-15 02:09:13.577316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:04.698 [2024-10-15 02:09:13.577327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577350] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:04.698 [2024-10-15 02:09:13.577366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:04.698 [2024-10-15 02:09:13.577383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:04.698 [2024-10-15 02:09:13.577395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:04.698 [2024-10-15 02:09:13.577440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:04.698 [2024-10-15 02:09:13.577455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:04.698 [2024-10-15 02:09:13.577469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:04.698 [2024-10-15 02:09:13.577480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:04.698 [2024-10-15 02:09:13.577493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:04.699 [2024-10-15 02:09:13.577504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:04.699 [2024-10-15 02:09:13.577523] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:04.699 [2024-10-15 02:09:13.577537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:04.699 
[2024-10-15 02:09:13.577564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:04.699 [2024-10-15 02:09:13.577594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:04.699 [2024-10-15 02:09:13.577606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:04.699 [2024-10-15 02:09:13.577620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:04.699 [2024-10-15 02:09:13.577631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:04.699 [2024-10-15 02:09:13.577647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:04.699 [2024-10-15 02:09:13.577659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:04.699 [2024-10-15 02:09:13.577673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:04.699 [2024-10-15 02:09:13.577685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:04.699 [2024-10-15 02:09:13.577770] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:04.699 [2024-10-15 02:09:13.577783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:04.699 [2024-10-15 02:09:13.577810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:04.699 [2024-10-15 02:09:13.577825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:04.699 [2024-10-15 02:09:13.577838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:04.699 [2024-10-15 02:09:13.577854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:04.699 [2024-10-15 02:09:13.577866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:04.699 [2024-10-15 
02:09:13.577883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:40:04.699 [2024-10-15 02:09:13.577896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:04.699 [2024-10-15 02:09:13.578008] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:40:04.699 [2024-10-15 02:09:13.578034] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:40:08.896 [2024-10-15 02:09:17.118062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.118154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:40:08.896 [2024-10-15 02:09:17.118198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3540.067 ms 00:40:08.896 [2024-10-15 02:09:17.118211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.175755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.175843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:08.896 [2024-10-15 02:09:17.175886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.010 ms 00:40:08.896 [2024-10-15 02:09:17.175903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.176207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.176238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:08.896 [2024-10-15 02:09:17.176256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:40:08.896 [2024-10-15 02:09:17.176269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.221627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.221707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:08.896 [2024-10-15 02:09:17.221747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.306 ms 00:40:08.896 [2024-10-15 02:09:17.221759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.221922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.221943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:08.896 [2024-10-15 02:09:17.221992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:08.896 [2024-10-15 02:09:17.222014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.222946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.222994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:08.896 [2024-10-15 02:09:17.223027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:40:08.896 [2024-10-15 02:09:17.223039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.223262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.223278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:08.896 [2024-10-15 02:09:17.223297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.151 ms 00:40:08.896 [2024-10-15 02:09:17.223308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.245099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.245163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:08.896 [2024-10-15 02:09:17.245200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.743 ms 00:40:08.896 [2024-10-15 02:09:17.245212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.258219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:08.896 [2024-10-15 02:09:17.283960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.284076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:08.896 [2024-10-15 02:09:17.284098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.577 ms 00:40:08.896 [2024-10-15 02:09:17.284114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.376276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.376370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:40:08.896 [2024-10-15 02:09:17.376392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.964 ms 00:40:08.896 [2024-10-15 02:09:17.376411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.376738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.376776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:08.896 [2024-10-15 02:09:17.376792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:40:08.896 [2024-10-15 02:09:17.376807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.402058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.402138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:40:08.896 [2024-10-15 02:09:17.402156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.205 ms 00:40:08.896 [2024-10-15 02:09:17.402170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.426070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.426151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:40:08.896 [2024-10-15 02:09:17.426169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.817 ms 00:40:08.896 [2024-10-15 02:09:17.426183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.427197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.896 [2024-10-15 02:09:17.427252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:08.896 [2024-10-15 02:09:17.427268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:40:08.896 [2024-10-15 02:09:17.427288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.896 [2024-10-15 02:09:17.508896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.897 
[2024-10-15 02:09:17.509109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:40:08.897 [2024-10-15 02:09:17.509138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.548 ms 00:40:08.897 [2024-10-15 02:09:17.509154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.897 [2024-10-15 02:09:17.536222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.897 [2024-10-15 02:09:17.536413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:40:08.897 [2024-10-15 02:09:17.536450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.941 ms 00:40:08.897 [2024-10-15 02:09:17.536468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.897 [2024-10-15 02:09:17.560936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.897 [2024-10-15 02:09:17.560998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:40:08.897 [2024-10-15 02:09:17.561014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.374 ms 00:40:08.897 [2024-10-15 02:09:17.561027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.897 [2024-10-15 02:09:17.585707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.897 [2024-10-15 02:09:17.585889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:08.897 [2024-10-15 02:09:17.585915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.589 ms 00:40:08.897 [2024-10-15 02:09:17.585933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.897 [2024-10-15 02:09:17.586046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.897 [2024-10-15 02:09:17.586070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:08.897 [2024-10-15 02:09:17.586084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:08.897 [2024-10-15 02:09:17.586103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.897 [2024-10-15 02:09:17.586228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:08.897 [2024-10-15 02:09:17.586246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:08.897 [2024-10-15 02:09:17.586261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:40:08.897 [2024-10-15 02:09:17.586274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:08.897 [2024-10-15 02:09:17.587949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:08.897 [2024-10-15 02:09:17.591395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4040.087 ms, result 0 00:40:08.897 [2024-10-15 02:09:17.592453] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:08.897 { 00:40:08.897 "name": "ftl0", 00:40:08.897 "uuid": "d1339797-0eae-46ce-abba-f9aa2d840265" 00:40:08.897 } 00:40:08.897 02:09:17 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:40:08.897 02:09:17 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:40:08.897 02:09:17 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:08.897 02:09:17 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:40:08.897 02:09:17 ftl.ftl_trim -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:08.897 02:09:17 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:08.897 02:09:17 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:08.897 02:09:17 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:40:09.155 [ 00:40:09.155 { 00:40:09.155 "name": "ftl0", 00:40:09.155 "aliases": [ 00:40:09.155 "d1339797-0eae-46ce-abba-f9aa2d840265" 00:40:09.155 ], 00:40:09.155 "product_name": "FTL disk", 00:40:09.155 "block_size": 4096, 00:40:09.155 "num_blocks": 23592960, 00:40:09.155 "uuid": "d1339797-0eae-46ce-abba-f9aa2d840265", 00:40:09.155 "assigned_rate_limits": { 00:40:09.155 "rw_ios_per_sec": 0, 00:40:09.155 "rw_mbytes_per_sec": 0, 00:40:09.155 "r_mbytes_per_sec": 0, 00:40:09.155 "w_mbytes_per_sec": 0 00:40:09.155 }, 00:40:09.155 "claimed": false, 00:40:09.155 "zoned": false, 00:40:09.155 "supported_io_types": { 00:40:09.155 "read": true, 00:40:09.155 "write": true, 00:40:09.155 "unmap": true, 00:40:09.155 "flush": true, 00:40:09.155 "reset": false, 00:40:09.155 "nvme_admin": false, 00:40:09.155 "nvme_io": false, 00:40:09.155 "nvme_io_md": false, 00:40:09.155 "write_zeroes": true, 00:40:09.155 "zcopy": false, 00:40:09.155 "get_zone_info": false, 00:40:09.155 "zone_management": false, 00:40:09.155 "zone_append": false, 00:40:09.155 "compare": false, 00:40:09.155 "compare_and_write": false, 00:40:09.155 "abort": false, 00:40:09.155 "seek_hole": false, 00:40:09.155 "seek_data": false, 00:40:09.155 "copy": false, 00:40:09.155 "nvme_iov_md": false 00:40:09.155 }, 00:40:09.155 "driver_specific": { 00:40:09.155 "ftl": { 00:40:09.155 "base_bdev": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:09.155 "cache": "nvc0n1p0" 00:40:09.155 } 00:40:09.155 } 00:40:09.155 } 00:40:09.155 ] 00:40:09.155 02:09:18 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:40:09.155 02:09:18 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:40:09.155 02:09:18 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:40:09.413 02:09:18 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:40:09.413 02:09:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:40:09.672 02:09:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:40:09.672 { 00:40:09.672 "name": "ftl0", 00:40:09.672 "aliases": [ 00:40:09.672 "d1339797-0eae-46ce-abba-f9aa2d840265" 00:40:09.672 ], 00:40:09.672 "product_name": "FTL disk", 00:40:09.672 "block_size": 4096, 00:40:09.672 "num_blocks": 23592960, 00:40:09.672 "uuid": "d1339797-0eae-46ce-abba-f9aa2d840265", 00:40:09.672 "assigned_rate_limits": { 00:40:09.672 "rw_ios_per_sec": 0, 00:40:09.672 "rw_mbytes_per_sec": 0, 00:40:09.672 "r_mbytes_per_sec": 0, 00:40:09.672 "w_mbytes_per_sec": 0 00:40:09.672 }, 00:40:09.672 "claimed": false, 00:40:09.672 "zoned": false, 00:40:09.672 "supported_io_types": { 00:40:09.672 "read": true, 00:40:09.672 "write": true, 00:40:09.672 "unmap": true, 00:40:09.672 "flush": true, 00:40:09.672 "reset": false, 00:40:09.672 "nvme_admin": false, 00:40:09.672 "nvme_io": false, 00:40:09.672 "nvme_io_md": false, 00:40:09.672 "write_zeroes": true, 00:40:09.672 "zcopy": false, 00:40:09.672 "get_zone_info": false, 00:40:09.672 "zone_management": false, 00:40:09.672 "zone_append": false, 00:40:09.672 "compare": false, 00:40:09.672 
"compare_and_write": false, 00:40:09.672 "abort": false, 00:40:09.672 "seek_hole": false, 00:40:09.672 "seek_data": false, 00:40:09.672 "copy": false, 00:40:09.672 "nvme_iov_md": false 00:40:09.672 }, 00:40:09.672 "driver_specific": { 00:40:09.672 "ftl": { 00:40:09.672 "base_bdev": "a939a074-1d10-4055-bfc4-7babbb677858", 00:40:09.672 "cache": "nvc0n1p0" 00:40:09.672 } 00:40:09.672 } 00:40:09.672 } 00:40:09.672 ]' 00:40:09.672 02:09:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:40:09.931 02:09:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:40:09.931 02:09:18 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:40:09.931 [2024-10-15 02:09:18.890666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:09.931 [2024-10-15 02:09:18.890721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:09.931 [2024-10-15 02:09:18.890745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:40:09.931 [2024-10-15 02:09:18.890757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:09.931 [2024-10-15 02:09:18.890827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:09.931 [2024-10-15 02:09:18.894315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:09.931 [2024-10-15 02:09:18.894349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:09.931 [2024-10-15 02:09:18.894364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.465 ms 00:40:09.931 [2024-10-15 02:09:18.894377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:09.931 [2024-10-15 02:09:18.895107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:09.931 [2024-10-15 02:09:18.895154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:09.931 [2024-10-15 02:09:18.895167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:40:09.931 [2024-10-15 02:09:18.895180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:09.931 [2024-10-15 02:09:18.898002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:09.931 [2024-10-15 02:09:18.898035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:09.931 [2024-10-15 02:09:18.898048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:40:09.931 [2024-10-15 02:09:18.898061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:09.931 [2024-10-15 02:09:18.903858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:09.931 [2024-10-15 02:09:18.903893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:09.931 [2024-10-15 02:09:18.903906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.761 ms 00:40:09.931 [2024-10-15 02:09:18.903922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:09.931 [2024-10-15 02:09:18.929848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:09.931 [2024-10-15 02:09:18.929901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:09.931 [2024-10-15 02:09:18.929919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.844 ms 00:40:09.931 [2024-10-15 02:09:18.929931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:40:10.191 [2024-10-15 02:09:18.947127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.191 [2024-10-15 02:09:18.947170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:10.191 [2024-10-15 02:09:18.947187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.121 ms 00:40:10.191 [2024-10-15 02:09:18.947201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.191 [2024-10-15 02:09:18.947465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.191 [2024-10-15 02:09:18.947489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:10.191 [2024-10-15 02:09:18.947502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:40:10.191 [2024-10-15 02:09:18.947519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.191 [2024-10-15 02:09:18.972510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.191 [2024-10-15 02:09:18.972716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:10.191 [2024-10-15 02:09:18.972742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.939 ms 00:40:10.191 [2024-10-15 02:09:18.972760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.191 [2024-10-15 02:09:18.997312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.191 [2024-10-15 02:09:18.997354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:10.191 [2024-10-15 02:09:18.997369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.448 ms 00:40:10.191 [2024-10-15 02:09:18.997381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.191 [2024-10-15 02:09:19.021220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.191 [2024-10-15 02:09:19.021263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:10.191 [2024-10-15 02:09:19.021278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.729 ms 00:40:10.191 [2024-10-15 02:09:19.021291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.191 [2024-10-15 02:09:19.044890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.191 [2024-10-15 02:09:19.044932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:10.191 [2024-10-15 02:09:19.044947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.432 ms 00:40:10.191 [2024-10-15 02:09:19.044960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.191 [2024-10-15 02:09:19.045030] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:10.191 [2024-10-15 02:09:19.045057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:10.191 [2024-10-15 02:09:19.045074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:10.191 [2024-10-15 02:09:19.045090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:10.191 [2024-10-15 02:09:19.045100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:10.191 [2024-10-15 02:09:19.045113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:40:10.191 [2024-10-15 02:09:19.045123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
[... Bands 7 through 99 elided: every entry reads "0 / 261120 wr_cnt: 0 state: free" ...]
00:40:10.192 [2024-10-15 02:09:19.046361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:40:10.192 [2024-10-15 02:09:19.046381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:40:10.192 [2024-10-15 02:09:19.046391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265
00:40:10.192 [2024-10-15 02:09:19.046413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:40:10.192 [2024-10-15 02:09:19.046425] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:40:10.192 [2024-10-15 02:09:19.046437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:40:10.192
[2024-10-15 02:09:19.046448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:10.192 [2024-10-15 02:09:19.046460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:10.192 [2024-10-15 02:09:19.046471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:10.192 [2024-10-15 02:09:19.046483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:10.192 [2024-10-15 02:09:19.046492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:10.192 [2024-10-15 02:09:19.046503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:10.192 [2024-10-15 02:09:19.046514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.192 [2024-10-15 02:09:19.046549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:10.192 [2024-10-15 02:09:19.046562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:40:10.192 [2024-10-15 02:09:19.046580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.192 [2024-10-15 02:09:19.061163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.192 [2024-10-15 02:09:19.061335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:10.192 [2024-10-15 02:09:19.061461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.521 ms 00:40:10.192 [2024-10-15 02:09:19.061567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.192 [2024-10-15 02:09:19.062140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:10.192 [2024-10-15 02:09:19.062262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:10.192 [2024-10-15 02:09:19.062373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:40:10.192 [2024-10-15 02:09:19.062551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.192 [2024-10-15 02:09:19.118725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.192 [2024-10-15 02:09:19.118934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:10.193 [2024-10-15 02:09:19.119057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.193 [2024-10-15 02:09:19.119110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.193 [2024-10-15 02:09:19.119288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.193 [2024-10-15 02:09:19.119376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:10.193 [2024-10-15 02:09:19.119483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.193 [2024-10-15 02:09:19.119541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.193 [2024-10-15 02:09:19.119656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.193 [2024-10-15 02:09:19.119840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:10.193 [2024-10-15 02:09:19.119904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.193 [2024-10-15 02:09:19.119946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.193 [2024-10-15 02:09:19.120039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.193 [2024-10-15 02:09:19.120232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:40:10.193 [2024-10-15 02:09:19.120263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.193 [2024-10-15 02:09:19.120282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.215799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.215877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:10.452 [2024-10-15 02:09:19.215897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.215911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.288247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.288313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:10.452 [2024-10-15 02:09:19.288337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.288354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.288563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.288587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:10.452 [2024-10-15 02:09:19.288601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.288636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.288711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.288735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:10.452 [2024-10-15 02:09:19.288753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.288766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.288926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.288954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:10.452 [2024-10-15 02:09:19.288966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.288979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.289055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.289083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:10.452 [2024-10-15 02:09:19.289095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.289111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.289188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 [2024-10-15 02:09:19.289205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:10.452 [2024-10-15 02:09:19.289216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.289229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.289306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:10.452 
[2024-10-15 02:09:19.289329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:10.452 [2024-10-15 02:09:19.289340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:10.452 [2024-10-15 02:09:19.289353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:10.452 [2024-10-15 02:09:19.289624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.941 ms, result 0 00:40:10.452 [2024-10-15 02:09:19.291286] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20003500cce0 was disconnected and freed. delete nvme_qpair. 00:40:10.452 [2024-10-15 02:09:19.292517] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035039ca0 was disconnected and freed. delete nvme_qpair. 00:40:10.452 true 00:40:10.452 02:09:19 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75985 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75985 ']' 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75985 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75985 00:40:10.452 killing process with pid 75985 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75985' 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75985 00:40:10.452 02:09:19 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75985 00:40:11.389 [2024-10-15 02:09:20.286014] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200036416920 was disconnected and freed. delete nvme_qpair. 00:40:15.576 02:09:24 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:40:16.512 65536+0 records in 00:40:16.512 65536+0 records out 00:40:16.512 268435456 bytes (268 MB, 256 MiB) copied, 1.02578 s, 262 MB/s 00:40:16.512 02:09:25 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:16.512 [2024-10-15 02:09:25.295939] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
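In the shutdown statistics above, WAF is printed as "inf": write amplification is the ratio of total writes (960, all of them metadata here) to user writes (0), so on an idle device the ratio has no finite value. The xtrace above it expands SPDK's killprocess helper from common/autotest_common.sh. A minimal bash sketch of that pattern, reconstructed only from what the trace shows (the sudo branch body and error handling are not visible here, so they are approximations):

# Sketch of the killprocess pattern seen in the xtrace above; details the
# trace does not show are stubbed out or approximated.
killprocess() {
    local pid=$1 process_name=
    [[ -n $pid ]] || return 1                     # the '[' -z 75985 ']' guard
    kill -0 "$pid" 2>/dev/null || return 0        # already gone? nothing to do
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    fi
    if [[ $process_name == sudo ]]; then
        :   # the real helper special-cases sudo-wrapped processes (branch not shown in this trace)
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap it and propagate the exit status
}

The dd numbers above are self-consistent: 65536 records × 4 KiB = 268435456 bytes = 256 MiB, and 268435456 B / 1.02578 s is roughly 262 MB/s, the rate dd reports. spdk_dd, whose startup banner begins here, then replays that 256 MiB random pattern into the ftl0 bdev described by ftl.json.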
00:40:16.512 [2024-10-15 02:09:25.296141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76194 ] 00:40:16.512 [2024-10-15 02:09:25.478716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.770 [2024-10-15 02:09:25.729785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.338 [2024-10-15 02:09:26.069482] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:17.338 [2024-10-15 02:09:26.069553] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:17.338 [2024-10-15 02:09:26.217732] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:40:17.338 [2024-10-15 02:09:26.231457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.231496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:17.338 [2024-10-15 02:09:26.231516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:40:17.338 [2024-10-15 02:09:26.231527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.234477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.234510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:17.338 [2024-10-15 02:09:26.234534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:40:17.338 [2024-10-15 02:09:26.234551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.234654] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:17.338 [2024-10-15 02:09:26.235428] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:17.338 [2024-10-15 02:09:26.235458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.235474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:17.338 [2024-10-15 02:09:26.235485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:40:17.338 [2024-10-15 02:09:26.235495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.237854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:17.338 [2024-10-15 02:09:26.252028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.252062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:17.338 [2024-10-15 02:09:26.252078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.176 ms 00:40:17.338 [2024-10-15 02:09:26.252089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.252210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.252238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:17.338 [2024-10-15 02:09:26.252251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:40:17.338 [2024-10-15 
02:09:26.252262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.263755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.263790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:17.338 [2024-10-15 02:09:26.263804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.439 ms 00:40:17.338 [2024-10-15 02:09:26.263816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.263976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.264000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:17.338 [2024-10-15 02:09:26.264014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:40:17.338 [2024-10-15 02:09:26.264026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.264070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.264085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:17.338 [2024-10-15 02:09:26.264097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:17.338 [2024-10-15 02:09:26.264108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.264152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:17.338 [2024-10-15 02:09:26.268878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.268907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:17.338 [2024-10-15 02:09:26.268921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:40:17.338 [2024-10-15 02:09:26.268932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.268997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.269013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:17.338 [2024-10-15 02:09:26.269026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:17.338 [2024-10-15 02:09:26.269037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.269063] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:17.338 [2024-10-15 02:09:26.269105] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:17.338 [2024-10-15 02:09:26.269151] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:17.338 [2024-10-15 02:09:26.269174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:17.338 [2024-10-15 02:09:26.269269] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:17.338 [2024-10-15 02:09:26.269284] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:17.338 [2024-10-15 02:09:26.269298] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
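Each management step in this startup sequence is logged by mngt/ftl_mngt.c:trace_step as a four-entry group: Action (or Rollback during teardown), name, duration, status. When sifting through a run like this, a step-duration table can be pulled out of a saved console log with a short pipeline; a minimal sketch, assuming one log entry per line as in the raw console output and GNU grep/sed ("build.log" is a hypothetical file name):

# Pair each step's "name:" entry with the "duration:" entry that follows it.
grep -E 'trace_step.*(name|duration):' build.log \
    | sed -E 's/.*(name|duration): //' \
    | paste - -
# e.g.  Check configuration    0.015 ms
#       Open base bdev         2.922 ms

In the layout dump that follows, sizes are reported both in 4 KiB blocks and in MiB, and the two views agree: the l2p region (type:0x2) has blk_sz 0x5a00 = 23040 blocks × 4 KiB = 90.00 MiB, which also matches the 23592960 L2P entries at the stated 4-byte address size (23592960 × 4 B = 90 MiB).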
00:40:17.338 [2024-10-15 02:09:26.269313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:17.338 [2024-10-15 02:09:26.269325] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:17.338 [2024-10-15 02:09:26.269337] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:17.338 [2024-10-15 02:09:26.269348] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:17.338 [2024-10-15 02:09:26.269358] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:17.338 [2024-10-15 02:09:26.269369] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:17.338 [2024-10-15 02:09:26.269381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.269397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:17.338 [2024-10-15 02:09:26.269424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:40:17.338 [2024-10-15 02:09:26.269437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.269523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.338 [2024-10-15 02:09:26.269537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:17.338 [2024-10-15 02:09:26.269548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:40:17.338 [2024-10-15 02:09:26.269558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.338 [2024-10-15 02:09:26.269654] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:17.338 [2024-10-15 02:09:26.269669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:17.338 [2024-10-15 02:09:26.269686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:17.338 [2024-10-15 02:09:26.269698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:17.338 [2024-10-15 02:09:26.269718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:17.338 [2024-10-15 02:09:26.269728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:17.338 [2024-10-15 02:09:26.269738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:17.338 [2024-10-15 02:09:26.269749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:17.338 [2024-10-15 02:09:26.269759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:17.338 [2024-10-15 02:09:26.269768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:17.338 [2024-10-15 02:09:26.269791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:17.338 [2024-10-15 02:09:26.269801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:17.339 [2024-10-15 02:09:26.269811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:17.339 [2024-10-15 02:09:26.269821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:17.339 [2024-10-15 02:09:26.269831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:17.339 [2024-10-15 02:09:26.269840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:17.339 [2024-10-15 02:09:26.269850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:40:17.339 [2024-10-15 02:09:26.269860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:17.339 [2024-10-15 02:09:26.269870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:17.339 [2024-10-15 02:09:26.269880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:17.339 [2024-10-15 02:09:26.269890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:17.339 [2024-10-15 02:09:26.269899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:17.339 [2024-10-15 02:09:26.269909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:17.339 [2024-10-15 02:09:26.269919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:17.339 [2024-10-15 02:09:26.269929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:17.339 [2024-10-15 02:09:26.269938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:17.339 [2024-10-15 02:09:26.269948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:17.339 [2024-10-15 02:09:26.269958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:17.339 [2024-10-15 02:09:26.269968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:17.339 [2024-10-15 02:09:26.269977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:17.339 [2024-10-15 02:09:26.269986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:17.339 [2024-10-15 02:09:26.269996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:17.339 [2024-10-15 02:09:26.270006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:17.339 [2024-10-15 02:09:26.270015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:17.339 [2024-10-15 02:09:26.270025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:17.339 [2024-10-15 02:09:26.270034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:17.339 [2024-10-15 02:09:26.270057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:17.339 [2024-10-15 02:09:26.270068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:17.339 [2024-10-15 02:09:26.270078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:17.339 [2024-10-15 02:09:26.270087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:17.339 [2024-10-15 02:09:26.270096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:17.339 [2024-10-15 02:09:26.270106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:17.339 [2024-10-15 02:09:26.270116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:17.339 [2024-10-15 02:09:26.270125] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:17.339 [2024-10-15 02:09:26.270136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:17.339 [2024-10-15 02:09:26.270147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:17.339 [2024-10-15 02:09:26.270157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:17.339 [2024-10-15 02:09:26.270168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:17.339 [2024-10-15 02:09:26.270179] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:17.339 [2024-10-15 02:09:26.270188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:17.339 [2024-10-15 02:09:26.270198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:17.339 [2024-10-15 02:09:26.270207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:17.339 [2024-10-15 02:09:26.270217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:17.339 [2024-10-15 02:09:26.270229] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:17.339 [2024-10-15 02:09:26.270247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:17.339 [2024-10-15 02:09:26.270270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:17.339 [2024-10-15 02:09:26.270280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:17.339 [2024-10-15 02:09:26.270291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:17.339 [2024-10-15 02:09:26.270300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:17.339 [2024-10-15 02:09:26.270311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:17.339 [2024-10-15 02:09:26.270321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:17.339 [2024-10-15 02:09:26.270331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:17.339 [2024-10-15 02:09:26.270341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:17.339 [2024-10-15 02:09:26.270351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:17.339 [2024-10-15 02:09:26.270425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:17.339 [2024-10-15 02:09:26.270437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:17.339 [2024-10-15 02:09:26.270464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:17.339 [2024-10-15 02:09:26.270474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:17.339 [2024-10-15 02:09:26.270485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:17.339 [2024-10-15 02:09:26.270495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.339 [2024-10-15 02:09:26.270506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:17.339 [2024-10-15 02:09:26.270517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:40:17.339 [2024-10-15 02:09:26.270539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.339 [2024-10-15 02:09:26.321394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.339 [2024-10-15 02:09:26.321458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:17.339 [2024-10-15 02:09:26.321479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.782 ms 00:40:17.339 [2024-10-15 02:09:26.321491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.339 [2024-10-15 02:09:26.321709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.339 [2024-10-15 02:09:26.321735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:17.339 [2024-10-15 02:09:26.321750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:40:17.339 [2024-10-15 02:09:26.321761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.598 [2024-10-15 02:09:26.363259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.598 [2024-10-15 02:09:26.363302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:17.598 [2024-10-15 02:09:26.363319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.462 ms 00:40:17.598 [2024-10-15 02:09:26.363332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.598 [2024-10-15 02:09:26.363482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.598 [2024-10-15 02:09:26.363501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:17.598 [2024-10-15 02:09:26.363515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:17.598 [2024-10-15 02:09:26.363532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.598 [2024-10-15 02:09:26.364255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.598 [2024-10-15 02:09:26.364279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:17.598 [2024-10-15 02:09:26.364292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:40:17.598 [2024-10-15 02:09:26.364303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.598 [2024-10-15 02:09:26.364479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:17.598 [2024-10-15 02:09:26.364499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:17.598 [2024-10-15 02:09:26.364512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:40:17.598 [2024-10-15 02:09:26.364523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.382539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.382572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:17.599 [2024-10-15 02:09:26.382588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.983 ms 00:40:17.599 [2024-10-15 02:09:26.382605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.396833] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:40:17.599 [2024-10-15 02:09:26.396872] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:17.599 [2024-10-15 02:09:26.396889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.396901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:17.599 [2024-10-15 02:09:26.396914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.148 ms 00:40:17.599 [2024-10-15 02:09:26.396925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.420661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.420697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:17.599 [2024-10-15 02:09:26.420712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.653 ms 00:40:17.599 [2024-10-15 02:09:26.420731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.433189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.433222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:17.599 [2024-10-15 02:09:26.433237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.373 ms 00:40:17.599 [2024-10-15 02:09:26.433248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.445472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.445504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:17.599 [2024-10-15 02:09:26.445518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.143 ms 00:40:17.599 [2024-10-15 02:09:26.445528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.446168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.446190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:17.599 [2024-10-15 02:09:26.446203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:40:17.599 [2024-10-15 02:09:26.446214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.516176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 
02:09:26.516253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:17.599 [2024-10-15 02:09:26.516275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.930 ms 00:40:17.599 [2024-10-15 02:09:26.516287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.526148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:17.599 [2024-10-15 02:09:26.548314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.548371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:17.599 [2024-10-15 02:09:26.548391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.875 ms 00:40:17.599 [2024-10-15 02:09:26.548412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.548553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.548573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:17.599 [2024-10-15 02:09:26.548588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:17.599 [2024-10-15 02:09:26.548599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.548691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.548713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:17.599 [2024-10-15 02:09:26.548725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:40:17.599 [2024-10-15 02:09:26.548737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.548770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.548785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:17.599 [2024-10-15 02:09:26.548798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:17.599 [2024-10-15 02:09:26.548809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.548866] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:17.599 [2024-10-15 02:09:26.548883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.548894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:17.599 [2024-10-15 02:09:26.548912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:40:17.599 [2024-10-15 02:09:26.548923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.576180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.576219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:17.599 [2024-10-15 02:09:26.576235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.230 ms 00:40:17.599 [2024-10-15 02:09:26.576246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.576375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:17.599 [2024-10-15 02:09:26.576398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:17.599 [2024-10-15 
02:09:26.576458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:40:17.599 [2024-10-15 02:09:26.576472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:17.599 [2024-10-15 02:09:26.577938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:17.599 [2024-10-15 02:09:26.581390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.092 ms, result 0 00:40:17.599 [2024-10-15 02:09:26.582215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:17.599 [2024-10-15 02:09:26.595890] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:18.975  [2024-10-15T02:09:28.926Z] Copying: 23/256 [MB] (23 MBps) [2024-10-15T02:09:29.860Z] Copying: 46/256 [MB] (23 MBps) [2024-10-15T02:09:30.796Z] Copying: 69/256 [MB] (23 MBps) [2024-10-15T02:09:31.731Z] Copying: 93/256 [MB] (23 MBps) [2024-10-15T02:09:32.666Z] Copying: 116/256 [MB] (23 MBps) [2024-10-15T02:09:33.601Z] Copying: 140/256 [MB] (23 MBps) [2024-10-15T02:09:35.002Z] Copying: 163/256 [MB] (23 MBps) [2024-10-15T02:09:35.936Z] Copying: 187/256 [MB] (23 MBps) [2024-10-15T02:09:36.871Z] Copying: 211/256 [MB] (23 MBps) [2024-10-15T02:09:37.807Z] Copying: 234/256 [MB] (23 MBps) [2024-10-15T02:09:37.807Z] Copying: 256/256 [MB] (average 23 MBps)[2024-10-15 02:09:37.500555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:28.795 [2024-10-15 02:09:37.511094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.511128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:28.796 [2024-10-15 02:09:37.511146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:28.796 [2024-10-15 02:09:37.511157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.511184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:28.796 [2024-10-15 02:09:37.514474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.514498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:28.796 [2024-10-15 02:09:37.514510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:40:28.796 [2024-10-15 02:09:37.514520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.516474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.516504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:28.796 [2024-10-15 02:09:37.516524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:40:28.796 [2024-10-15 02:09:37.516535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.522865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.522898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:28.796 [2024-10-15 02:09:37.522912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:40:28.796 [2024-10-15 02:09:37.522922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.528792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.528822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:28.796 [2024-10-15 02:09:37.528842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.818 ms 00:40:28.796 [2024-10-15 02:09:37.528853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.552671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.552705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:28.796 [2024-10-15 02:09:37.552719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.767 ms 00:40:28.796 [2024-10-15 02:09:37.552730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.568250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.568282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:28.796 [2024-10-15 02:09:37.568297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.479 ms 00:40:28.796 [2024-10-15 02:09:37.568308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.568474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.568493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:28.796 [2024-10-15 02:09:37.568506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:40:28.796 [2024-10-15 02:09:37.568516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.593153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.593196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:28.796 [2024-10-15 02:09:37.593211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.611 ms 00:40:28.796 [2024-10-15 02:09:37.593221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.617247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.617280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:28.796 [2024-10-15 02:09:37.617293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.987 ms 00:40:28.796 [2024-10-15 02:09:37.617302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.640850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.640882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:28.796 [2024-10-15 02:09:37.640896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.509 ms 00:40:28.796 [2024-10-15 02:09:37.640906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.664443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.796 [2024-10-15 02:09:37.664475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:28.796 [2024-10-15 02:09:37.664489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.473 ms 00:40:28.796 
[2024-10-15 02:09:37.664499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.796 [2024-10-15 02:09:37.664552] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:28.796 [2024-10-15 02:09:37.664577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664818] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.664997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 
02:09:37.665085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:28.796 [2024-10-15 02:09:37.665116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:40:28.797 [2024-10-15 02:09:37.665341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:28.797 [2024-10-15 02:09:37.665664] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:28.797 [2024-10-15 02:09:37.665674] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265 00:40:28.797 [2024-10-15 02:09:37.665685] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:28.797 [2024-10-15 02:09:37.665694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:28.797 [2024-10-15 02:09:37.665703] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:28.797 [2024-10-15 02:09:37.665719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:28.797 [2024-10-15 02:09:37.665728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:28.797 [2024-10-15 02:09:37.665738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:28.797 [2024-10-15 02:09:37.665748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:28.797 [2024-10-15 02:09:37.665756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:28.797 [2024-10-15 02:09:37.665765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:28.797 [2024-10-15 02:09:37.665775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.797 [2024-10-15 02:09:37.665785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:28.797 [2024-10-15 02:09:37.665796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:40:28.797 [2024-10-15 02:09:37.665806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.797 [2024-10-15 02:09:37.679935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.797 [2024-10-15 02:09:37.679970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:28.797 [2024-10-15 02:09:37.679984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.090 ms 00:40:28.797 [2024-10-15 02:09:37.679995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.797 [2024-10-15 02:09:37.680464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.797 [2024-10-15 02:09:37.680486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:28.797 [2024-10-15 02:09:37.680498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:40:28.797 [2024-10-15 02:09:37.680509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.797 [2024-10-15 02:09:37.715816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.797 [2024-10-15 02:09:37.715852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:28.797 [2024-10-15 02:09:37.715867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.797 [2024-10-15 02:09:37.715878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.797 [2024-10-15 02:09:37.715961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.797 [2024-10-15 02:09:37.715977] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:28.797 [2024-10-15 02:09:37.715989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.797 [2024-10-15 02:09:37.716000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.797 [2024-10-15 02:09:37.716056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.797 [2024-10-15 02:09:37.716079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:28.797 [2024-10-15 02:09:37.716091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.797 [2024-10-15 02:09:37.716102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.797 [2024-10-15 02:09:37.716126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:28.797 [2024-10-15 02:09:37.716140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:28.797 [2024-10-15 02:09:37.716151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:28.797 [2024-10-15 02:09:37.716168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.806568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.806653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:29.057 [2024-10-15 02:09:37.806674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.806686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.878953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:29.057 [2024-10-15 02:09:37.879023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:29.057 [2024-10-15 02:09:37.879140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:29.057 [2024-10-15 02:09:37.879222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:29.057 [2024-10-15 02:09:37.879383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:29.057 [2024-10-15 02:09:37.879507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:29.057 [2024-10-15 02:09:37.879601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:29.057 [2024-10-15 02:09:37.879693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:29.057 [2024-10-15 02:09:37.879707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:29.057 [2024-10-15 02:09:37.879719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.057 [2024-10-15 02:09:37.879909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.787 ms, result 0 00:40:29.057 [2024-10-15 02:09:37.881547] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019efcce0 was disconnected and freed. delete nvme_qpair. 00:40:29.057 [2024-10-15 02:09:37.882766] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001992aca0 was disconnected and freed. delete nvme_qpair. 00:40:29.057 [2024-10-15 02:09:37.886995] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:40:30.432 00:40:30.432 00:40:30.432 02:09:39 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76335 00:40:30.432 02:09:39 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:40:30.432 02:09:39 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76335 00:40:30.432 02:09:39 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76335 ']' 00:40:30.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.432 02:09:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.432 02:09:39 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:30.432 02:09:39 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.432 02:09:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:30.432 02:09:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:40:30.432 [2024-10-15 02:09:39.227534] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:40:30.432 [2024-10-15 02:09:39.228734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76335 ] 00:40:30.432 [2024-10-15 02:09:39.405294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.691 [2024-10-15 02:09:39.612926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.627 02:09:40 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:31.627 02:09:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:40:31.627 02:09:40 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:40:31.627 [2024-10-15 02:09:40.628194] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:31.627 [2024-10-15 02:09:40.628262] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:31.887 [2024-10-15 02:09:40.792606] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:40:31.887 [2024-10-15 02:09:40.806218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.806260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:31.887 [2024-10-15 02:09:40.806279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:40:31.887 [2024-10-15 02:09:40.806295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.809281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.809320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:31.887 [2024-10-15 02:09:40.809336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:40:31.887 [2024-10-15 02:09:40.809349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.809465] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:31.887 [2024-10-15 02:09:40.810206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:31.887 [2024-10-15 02:09:40.810237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.810250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:31.887 [2024-10-15 02:09:40.810261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:40:31.887 [2024-10-15 02:09:40.810273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.812642] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:31.887 [2024-10-15 02:09:40.826845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.826881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:31.887 [2024-10-15 02:09:40.826900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.201 ms 00:40:31.887 [2024-10-15 02:09:40.826911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.827016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.827035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:31.887 [2024-10-15 02:09:40.827050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:40:31.887 [2024-10-15 02:09:40.827060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.838620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.838662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:31.887 [2024-10-15 02:09:40.838680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.498 ms 00:40:31.887 [2024-10-15 02:09:40.838694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.838845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.838864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:31.887 [2024-10-15 02:09:40.838878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:40:31.887 [2024-10-15 02:09:40.838889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.838930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.838943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:31.887 [2024-10-15 02:09:40.838957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:31.887 [2024-10-15 02:09:40.838967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.839007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:31.887 [2024-10-15 02:09:40.843691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.843726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:31.887 [2024-10-15 02:09:40.843743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:40:31.887 [2024-10-15 02:09:40.843758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.843816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.887 [2024-10-15 02:09:40.843835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:31.887 [2024-10-15 02:09:40.843847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:31.887 [2024-10-15 02:09:40.843860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.887 [2024-10-15 02:09:40.843902] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:31.887 [2024-10-15 02:09:40.843934] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:31.887 [2024-10-15 02:09:40.843979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:31.888 [2024-10-15 02:09:40.844007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:31.888 [2024-10-15 02:09:40.844100] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:31.888 [2024-10-15 
02:09:40.844117] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:31.888 [2024-10-15 02:09:40.844130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:31.888 [2024-10-15 02:09:40.844148] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844160] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:31.888 [2024-10-15 02:09:40.844186] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:31.888 [2024-10-15 02:09:40.844213] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:31.888 [2024-10-15 02:09:40.844226] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:31.888 [2024-10-15 02:09:40.844240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.888 [2024-10-15 02:09:40.844250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:31.888 [2024-10-15 02:09:40.844263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:40:31.888 [2024-10-15 02:09:40.844273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.888 [2024-10-15 02:09:40.844358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.888 [2024-10-15 02:09:40.844371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:31.888 [2024-10-15 02:09:40.844384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:40:31.888 [2024-10-15 02:09:40.844394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.888 [2024-10-15 02:09:40.844514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:31.888 [2024-10-15 02:09:40.844538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:31.888 [2024-10-15 02:09:40.844553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:31.888 [2024-10-15 02:09:40.844588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:31.888 [2024-10-15 02:09:40.844628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:31.888 [2024-10-15 02:09:40.844650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:31.888 [2024-10-15 02:09:40.844659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:31.888 [2024-10-15 02:09:40.844671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:31.888 [2024-10-15 02:09:40.844680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:31.888 [2024-10-15 02:09:40.844692] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:31.888 [2024-10-15 02:09:40.844701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:31.888 [2024-10-15 02:09:40.844732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:31.888 [2024-10-15 02:09:40.844767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:31.888 [2024-10-15 02:09:40.844799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:31.888 [2024-10-15 02:09:40.844832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:31.888 [2024-10-15 02:09:40.844861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:31.888 [2024-10-15 02:09:40.844882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:31.888 [2024-10-15 02:09:40.844893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:31.888 [2024-10-15 02:09:40.844916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:31.888 [2024-10-15 02:09:40.844925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:31.888 [2024-10-15 02:09:40.844936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:31.888 [2024-10-15 02:09:40.844946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:31.888 [2024-10-15 02:09:40.844963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:31.888 [2024-10-15 02:09:40.844973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.844985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:31.888 [2024-10-15 02:09:40.844994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:31.888 [2024-10-15 02:09:40.845006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.845015] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:31.888 [2024-10-15 02:09:40.845028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:31.888 [2024-10-15 02:09:40.845038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:40:31.888 [2024-10-15 02:09:40.845050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:31.888 [2024-10-15 02:09:40.845060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:31.888 [2024-10-15 02:09:40.845072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:31.888 [2024-10-15 02:09:40.845081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:31.888 [2024-10-15 02:09:40.845093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:31.888 [2024-10-15 02:09:40.845102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:31.888 [2024-10-15 02:09:40.845114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:31.888 [2024-10-15 02:09:40.845125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:31.888 [2024-10-15 02:09:40.845143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:31.888 [2024-10-15 02:09:40.845166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:31.888 [2024-10-15 02:09:40.845175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:31.888 [2024-10-15 02:09:40.845187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:31.888 [2024-10-15 02:09:40.845197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:31.888 [2024-10-15 02:09:40.845210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:31.888 [2024-10-15 02:09:40.845219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:31.888 [2024-10-15 02:09:40.845231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:31.888 [2024-10-15 02:09:40.845240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:31.888 [2024-10-15 02:09:40.845252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 
00:40:31.888 [2024-10-15 02:09:40.845304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:31.888 [2024-10-15 02:09:40.845324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:31.888 [2024-10-15 02:09:40.845349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:31.888 [2024-10-15 02:09:40.845359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:31.888 [2024-10-15 02:09:40.845371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:31.888 [2024-10-15 02:09:40.845382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.888 [2024-10-15 02:09:40.845395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:31.888 [2024-10-15 02:09:40.845418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:40:31.888 [2024-10-15 02:09:40.845433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.888 [2024-10-15 02:09:40.886253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.889 [2024-10-15 02:09:40.886313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:31.889 [2024-10-15 02:09:40.886332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.736 ms 00:40:31.889 [2024-10-15 02:09:40.886346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:31.889 [2024-10-15 02:09:40.886550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:31.889 [2024-10-15 02:09:40.886575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:31.889 [2024-10-15 02:09:40.886588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:40:31.889 [2024-10-15 02:09:40.886604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:40.940843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:40.940905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:32.148 [2024-10-15 02:09:40.940924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.206 ms 00:40:32.148 [2024-10-15 02:09:40.940942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:40.941104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:40.941132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:32.148 [2024-10-15 02:09:40.941152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:32.148 [2024-10-15 02:09:40.941168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:40.941925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:40.941958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:32.148 [2024-10-15 
02:09:40.941973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:40:32.148 [2024-10-15 02:09:40.941990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:40.942164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:40.942188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:32.148 [2024-10-15 02:09:40.942200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:40:32.148 [2024-10-15 02:09:40.942228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:40.963323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:40.963364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:32.148 [2024-10-15 02:09:40.963382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.066 ms 00:40:32.148 [2024-10-15 02:09:40.963396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:40.977974] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:40:32.148 [2024-10-15 02:09:40.978020] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:32.148 [2024-10-15 02:09:40.978037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:40.978053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:32.148 [2024-10-15 02:09:40.978066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.500 ms 00:40:32.148 [2024-10-15 02:09:40.978082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.002458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.002505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:32.148 [2024-10-15 02:09:41.002560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:40:32.148 [2024-10-15 02:09:41.002583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.015193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.015243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:32.148 [2024-10-15 02:09:41.015259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.516 ms 00:40:32.148 [2024-10-15 02:09:41.015275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.027734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.027777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:32.148 [2024-10-15 02:09:41.027793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:40:32.148 [2024-10-15 02:09:41.027810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.028509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.028543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:32.148 [2024-10-15 02:09:41.028558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.578 ms 00:40:32.148 [2024-10-15 02:09:41.028580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.099997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.100088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:32.148 [2024-10-15 02:09:41.100116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.382 ms 00:40:32.148 [2024-10-15 02:09:41.100134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.109790] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:32.148 [2024-10-15 02:09:41.131989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.132042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:32.148 [2024-10-15 02:09:41.132067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.726 ms 00:40:32.148 [2024-10-15 02:09:41.132080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.132217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.132236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:32.148 [2024-10-15 02:09:41.132255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:32.148 [2024-10-15 02:09:41.132276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.132368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.132390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:32.148 [2024-10-15 02:09:41.132424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:40:32.148 [2024-10-15 02:09:41.132440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.132482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.132496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:32.148 [2024-10-15 02:09:41.132522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:32.148 [2024-10-15 02:09:41.132534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.132607] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:32.148 [2024-10-15 02:09:41.132624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.148 [2024-10-15 02:09:41.132641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:32.148 [2024-10-15 02:09:41.132653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:40:32.148 [2024-10-15 02:09:41.132668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.148 [2024-10-15 02:09:41.158482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.149 [2024-10-15 02:09:41.158533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:32.149 [2024-10-15 02:09:41.158552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.781 ms 00:40:32.149 [2024-10-15 02:09:41.158578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:40:32.149 [2024-10-15 02:09:41.158691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.149 [2024-10-15 02:09:41.158716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:32.149 [2024-10-15 02:09:41.158729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:40:32.149 [2024-10-15 02:09:41.158745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.407 [2024-10-15 02:09:41.160199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:32.407 [2024-10-15 02:09:41.163415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.564 ms, result 0 00:40:32.407 [2024-10-15 02:09:41.164600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:32.407 Some configs were skipped because the RPC state that can call them passed over. 00:40:32.407 02:09:41 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:40:32.666 [2024-10-15 02:09:41.459463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.666 [2024-10-15 02:09:41.459508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:32.666 [2024-10-15 02:09:41.459530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:40:32.666 [2024-10-15 02:09:41.459542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.666 [2024-10-15 02:09:41.459591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.529 ms, result 0 00:40:32.666 true 00:40:32.666 02:09:41 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:40:32.666 [2024-10-15 02:09:41.667311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.666 [2024-10-15 02:09:41.667363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:32.666 [2024-10-15 02:09:41.667380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:40:32.666 [2024-10-15 02:09:41.667397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.666 [2024-10-15 02:09:41.667476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.208 ms, result 0 00:40:32.666 true 00:40:32.925 02:09:41 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76335 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76335 ']' 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76335 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76335 00:40:32.925 killing process with pid 76335 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76335' 00:40:32.925 02:09:41 ftl.ftl_trim -- 
common/autotest_common.sh@969 -- # kill 76335 00:40:32.925 02:09:41 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76335 00:40:33.864 [2024-10-15 02:09:42.607994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.608080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:33.864 [2024-10-15 02:09:42.608104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:33.864 [2024-10-15 02:09:42.608115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.608148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:33.864 [2024-10-15 02:09:42.611501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.611537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:33.864 [2024-10-15 02:09:42.611551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.333 ms 00:40:33.864 [2024-10-15 02:09:42.611563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.611814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.611832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:33.864 [2024-10-15 02:09:42.611846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:40:33.864 [2024-10-15 02:09:42.611858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.615191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.615233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:33.864 [2024-10-15 02:09:42.615248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:40:33.864 [2024-10-15 02:09:42.615260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.620900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.620936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:33.864 [2024-10-15 02:09:42.620951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.600 ms 00:40:33.864 [2024-10-15 02:09:42.620962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.630824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.630866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:33.864 [2024-10-15 02:09:42.630880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.807 ms 00:40:33.864 [2024-10-15 02:09:42.630891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.639259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.639299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:33.864 [2024-10-15 02:09:42.639323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.329 ms 00:40:33.864 [2024-10-15 02:09:42.639335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.639496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 
[2024-10-15 02:09:42.639519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:33.864 [2024-10-15 02:09:42.639542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:40:33.864 [2024-10-15 02:09:42.639558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.650124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.650166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:33.864 [2024-10-15 02:09:42.650181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.536 ms 00:40:33.864 [2024-10-15 02:09:42.650199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.660284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.660330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:33.864 [2024-10-15 02:09:42.660344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.033 ms 00:40:33.864 [2024-10-15 02:09:42.660360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.669938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.669980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:33.864 [2024-10-15 02:09:42.669994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.516 ms 00:40:33.864 [2024-10-15 02:09:42.670009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.679564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.864 [2024-10-15 02:09:42.679605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:33.864 [2024-10-15 02:09:42.679619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.487 ms 00:40:33.864 [2024-10-15 02:09:42.679634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.864 [2024-10-15 02:09:42.679672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:33.864 [2024-10-15 02:09:42.679702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 
[2024-10-15 02:09:42.679835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.679995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 
state: free 00:40:33.864 [2024-10-15 02:09:42.680192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:33.864 [2024-10-15 02:09:42.680293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:33.865 [2024-10-15 02:09:42.680532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 
0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.680988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:40:33.865 [2024-10-15 02:09:42.681102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:40:33.865 [2024-10-15 02:09:42.681114] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265
00:40:33.865 [2024-10-15 02:09:42.681130] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:40:33.865 [2024-10-15 02:09:42.681140] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:40:33.865 [2024-10-15 02:09:42.681155] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:40:33.865 [2024-10-15 02:09:42.681179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:40:33.865 [2024-10-15 02:09:42.681196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:40:33.865 [2024-10-15 02:09:42.681213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:40:33.865 [2024-10-15 02:09:42.681229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:40:33.865 [2024-10-15 02:09:42.681239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:40:33.865 [2024-10-15 02:09:42.681253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
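The statistics block above ends with "WAF: inf", and that follows directly from the counters it prints: write amplification is conventionally media writes divided by host writes, and this run performed 960 internal writes with zero user writes, so the ratio is unbounded. A minimal sketch of the calculation (my own helper, not SPDK code):

    import math

    def waf_from_counters(total_writes: int, user_writes: int) -> float:
        """Write amplification factor: media writes per user write.
        With no user writes the ratio is undefined and is reported as inf,
        matching the 'WAF: inf' line in the dump above."""
        if user_writes == 0:
            return math.inf
        return total_writes / user_writes

    print(waf_from_counters(960, 0))    # inf, as in this log
    print(waf_from_counters(960, 480))  # 2.0 would mean 2x amplification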
00:40:33.865 [2024-10-15 02:09:42.681263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:33.865 [2024-10-15 02:09:42.681278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:40:33.865 [2024-10-15 02:09:42.681290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms
00:40:33.865 [2024-10-15 02:09:42.681307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.695560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:33.865 [2024-10-15 02:09:42.695608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:40:33.865 [2024-10-15 02:09:42.695623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.216 ms
00:40:33.865 [2024-10-15 02:09:42.695647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.696114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:33.865 [2024-10-15 02:09:42.696146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:40:33.865 [2024-10-15 02:09:42.696159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms
00:40:33.865 [2024-10-15 02:09:42.696175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.742056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:33.865 [2024-10-15 02:09:42.742101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:40:33.865 [2024-10-15 02:09:42.742119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:33.865 [2024-10-15 02:09:42.742133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.742254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:33.865 [2024-10-15 02:09:42.742277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:40:33.865 [2024-10-15 02:09:42.742288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:33.865 [2024-10-15 02:09:42.742308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.742365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:33.865 [2024-10-15 02:09:42.742395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:40:33.865 [2024-10-15 02:09:42.742422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:33.865 [2024-10-15 02:09:42.742447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.742473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:33.865 [2024-10-15 02:09:42.742492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:40:33.865 [2024-10-15 02:09:42.742504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:33.865 [2024-10-15 02:09:42.742519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:33.865 [2024-10-15 02:09:42.831512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:33.865 [2024-10-15 02:09:42.831595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:40:33.865 [2024-10-15 02:09:42.831615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:33.865 [2024-10-15 02:09:42.831641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.904147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.904220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:40:34.124 [2024-10-15 02:09:42.904240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.904257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.904389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.904437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:40:34.124 [2024-10-15 02:09:42.904453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.904470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.904519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.904540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:40:34.124 [2024-10-15 02:09:42.904551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.904567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.904692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.904727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:40:34.124 [2024-10-15 02:09:42.904741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.904758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.904815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.904849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:40:34.124 [2024-10-15 02:09:42.904862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.904878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.904933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.904959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:40:34.124 [2024-10-15 02:09:42.904971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.904987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.905060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:34.124 [2024-10-15 02:09:42.905092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:40:34.124 [2024-10-15 02:09:42.905105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:34.124 [2024-10-15 02:09:42.905121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:34.124 [2024-10-15 02:09:42.905312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 297.287 ms, result 0
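Every management step in the shutdown above is logged as a fixed four-record group from trace_step: the step kind (Action or Rollback), its name, its duration, and its status. That regularity makes it straightforward to scrape a per-step timing table out of a console log like this one; a rough parser sketch (the regexes and helper are mine, not an SPDK tool):

    import re
    import sys

    STEP_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] (Action|Rollback)")
    NAME_RE = re.compile(r"name: (.+)$")
    DUR_RE = re.compile(r"duration: ([\d.]+) ms")

    def steps(lines):
        """Yield (device, kind, name, duration_ms) for each trace_step group."""
        cur = None
        for line in lines:
            m = STEP_RE.search(line)
            if m:
                cur = [m.group(1), m.group(2), None, None]
                continue
            if cur is None:
                continue
            m = NAME_RE.search(line.rstrip())
            if m and cur[2] is None:
                cur[2] = m.group(1)
                continue
            m = DUR_RE.search(line)
            if m:
                cur[3] = float(m.group(1))
                yield tuple(cur)
                cur = None

    # Usage: python steps.py < console.log
    for dev, kind, name, ms in steps(sys.stdin):
        print(f"{dev} {kind:8s} {ms:10.3f} ms  {name}")

Run over the shutdown above, this would show the 297.287 ms total dominated by the NV cache and metadata rollback steps, with the remaining steps at or near zero.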
00:40:34.124 [2024-10-15 02:09:42.906971] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20000d8feb60 was disconnected and freed. delete nvme_qpair.
00:40:34.124 [2024-10-15 02:09:42.908206] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001c438ca0 was disconnected and freed. delete nvme_qpair.
00:40:34.124 [2024-10-15 02:09:42.912649] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair.
00:40:35.058 02:09:43 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:40:35.059 02:09:43 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:40:35.059 [2024-10-15 02:09:43.969201] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
00:40:35.317 [2024-10-15 02:09:43.969385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76399 ]
00:40:35.317 [2024-10-15 02:09:44.138611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:35.575 [2024-10-15 02:09:44.349793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:40:35.833 [2024-10-15 02:09:44.688682] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:40:35.833 [2024-10-15 02:09:44.688763] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:40:35.833 [2024-10-15 02:09:44.836288] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair.
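The spdk_dd invocation above reads 65536 logical blocks from the ftl0 bdev into a plain file. Assuming a 4 KiB logical block size for this FTL instance (the block size is not printed in this excerpt, so this is an assumption), that works out to exactly the 256 MB the "Copying:" progress lines report further down. A quick sanity check:

    # Hedged back-of-the-envelope check of the spdk_dd command above.
    # A 4096-byte block is assumed; --count is in input-block units.
    count = 65536          # --count from the command line
    block_size = 4096      # assumed FTL logical block size, in bytes
    total_mib = count * block_size / (1 << 20)
    print(total_mib)       # 256.0 -> matches "Copying: 256/256 [MB]" below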
00:40:36.093 [2024-10-15 02:09:44.849963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.850006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:40:36.093 [2024-10-15 02:09:44.850024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:40:36.093 [2024-10-15 02:09:44.850035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.853024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.853061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:40:36.093 [2024-10-15 02:09:44.853076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.965 ms
00:40:36.093 [2024-10-15 02:09:44.853089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.853190] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:40:36.093 [2024-10-15 02:09:44.853926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:40:36.093 [2024-10-15 02:09:44.853957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.853972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:40:36.093 [2024-10-15 02:09:44.853982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms
00:40:36.093 [2024-10-15 02:09:44.853992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.856524] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:40:36.093 [2024-10-15 02:09:44.870685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.870735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:40:36.093 [2024-10-15 02:09:44.870752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.163 ms
00:40:36.093 [2024-10-15 02:09:44.870762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.870865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.870888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:40:36.093 [2024-10-15 02:09:44.870900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:40:36.093 [2024-10-15 02:09:44.870910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.882454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.882492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:40:36.093 [2024-10-15 02:09:44.882506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.495 ms
00:40:36.093 [2024-10-15 02:09:44.882517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.882688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.882710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:40:36.093 [2024-10-15 02:09:44.882723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:40:36.093 [2024-10-15 02:09:44.882733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.882770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.882783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:40:36.093 [2024-10-15 02:09:44.882795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:40:36.093 [2024-10-15 02:09:44.882806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.882835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:40:36.093 [2024-10-15 02:09:44.887552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.887587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:40:36.093 [2024-10-15 02:09:44.887601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms
00:40:36.093 [2024-10-15 02:09:44.887612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.887674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.093 [2024-10-15 02:09:44.887690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:40:36.093 [2024-10-15 02:09:44.887702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:40:36.093 [2024-10-15 02:09:44.887713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.093 [2024-10-15 02:09:44.887739] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:40:36.093 [2024-10-15 02:09:44.887766] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:40:36.093 [2024-10-15 02:09:44.887803] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:40:36.093 [2024-10-15 02:09:44.887825] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:40:36.093 [2024-10-15 02:09:44.887920] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:40:36.093 [2024-10-15 02:09:44.887935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:40:36.093 [2024-10-15 02:09:44.887948] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:40:36.093 [2024-10-15 02:09:44.887962] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:40:36.093 [2024-10-15 02:09:44.887973] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:40:36.093 [2024-10-15 02:09:44.887984] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:40:36.093 [2024-10-15 02:09:44.887996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:40:36.093 [2024-10-15 02:09:44.888007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:40:36.093 [2024-10-15 02:09:44.888017] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
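The geometry numbers just logged are internally consistent: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB that the l2p region occupies in the NV cache layout dumped below. A quick check, with the values copied from the log:

    # Cross-checking the logged L2P geometry.
    entries = 23592960         # "L2P entries"
    addr_size = 4              # "L2P address size", in bytes
    l2p_mib = entries * addr_size / (1 << 20)
    print(l2p_mib)             # 90.0 -> "Region l2p ... blocks: 90.00 MiB"

    # The entry count is also the user-visible capacity in logical blocks:
    # assuming 4 KiB blocks (not stated in this excerpt), that is 90 GiB
    # of addressable space on a 103424 MiB base device.
    print(entries * 4096 / (1 << 30))  # 90.0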
00:40:36.094 [2024-10-15 02:09:44.888028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.888042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:40:36.094 [2024-10-15 02:09:44.888053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms
00:40:36.094 [2024-10-15 02:09:44.888063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.888145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.888158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:40:36.094 [2024-10-15 02:09:44.888169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:40:36.094 [2024-10-15 02:09:44.888179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.888272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:40:36.094 [2024-10-15 02:09:44.888288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:40:36.094 [2024-10-15 02:09:44.888304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:40:36.094 [2024-10-15 02:09:44.888334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:40:36.094 [2024-10-15 02:09:44.888366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:40:36.094 [2024-10-15 02:09:44.888396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:40:36.094 [2024-10-15 02:09:44.888424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:40:36.094 [2024-10-15 02:09:44.888437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:40:36.094 [2024-10-15 02:09:44.888446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:40:36.094 [2024-10-15 02:09:44.888456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:40:36.094 [2024-10-15 02:09:44.888465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:40:36.094 [2024-10-15 02:09:44.888483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:40:36.094 [2024-10-15 02:09:44.888511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:40:36.094 [2024-10-15 02:09:44.888540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:40:36.094 [2024-10-15 02:09:44.888567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:40:36.094 [2024-10-15 02:09:44.888593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:40:36.094 [2024-10-15 02:09:44.888619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:40:36.094 [2024-10-15 02:09:44.888637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:40:36.094 [2024-10-15 02:09:44.888646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:40:36.094 [2024-10-15 02:09:44.888655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:40:36.094 [2024-10-15 02:09:44.888664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:40:36.094 [2024-10-15 02:09:44.888675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:40:36.094 [2024-10-15 02:09:44.888685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:40:36.094 [2024-10-15 02:09:44.888703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:40:36.094 [2024-10-15 02:09:44.888713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:40:36.094 [2024-10-15 02:09:44.888732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:40:36.094 [2024-10-15 02:09:44.888742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:40:36.094 [2024-10-15 02:09:44.888763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:40:36.094 [2024-10-15 02:09:44.888773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:40:36.094 [2024-10-15 02:09:44.888782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:40:36.094 [2024-10-15 02:09:44.888792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:40:36.094 [2024-10-15 02:09:44.888800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:40:36.094 [2024-10-15 02:09:44.888810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:40:36.094 [2024-10-15 02:09:44.888821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:40:36.094 [2024-10-15 02:09:44.888840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.888851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:40:36.094 [2024-10-15 02:09:44.888861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:40:36.094 [2024-10-15 02:09:44.888870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:40:36.094 [2024-10-15 02:09:44.888880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:40:36.094 [2024-10-15 02:09:44.888890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:40:36.094 [2024-10-15 02:09:44.888900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:40:36.094 [2024-10-15 02:09:44.888909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:40:36.094 [2024-10-15 02:09:44.888919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:40:36.094 [2024-10-15 02:09:44.888929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:40:36.094 [2024-10-15 02:09:44.888939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.888948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.888958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.888967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.888977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:40:36.094 [2024-10-15 02:09:44.888987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:40:36.094 [2024-10-15 02:09:44.888999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.889014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:40:36.094 [2024-10-15 02:09:44.889024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:40:36.094 [2024-10-15 02:09:44.889034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:40:36.094 [2024-10-15 02:09:44.889046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
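Each superblock layout line above is one row of a compact table: region type, version, block offset, and block size, all in hex blocks. Parsed back into integers, the rows can be checked for gaps and overlaps, and the last base-dev row closes exactly at the device capacity logged earlier, assuming 4 KiB blocks (not stated explicitly in this excerpt). A sketch with a made-up helper, not an SPDK tool:

    import re

    REGION_RE = re.compile(
        r"Region type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)")

    def parse_regions(lines):
        """Return (type, ver, offset, size) tuples, offsets/sizes in blocks."""
        out = []
        for line in lines:
            m = REGION_RE.search(line)
            if m:
                t, v, off, sz = m.groups()
                out.append((int(t, 16), int(v), int(off, 16), int(sz, 16)))
        return out

    # The base-dev table above ends at 0x19003a0 + 0x3fc60 = 0x1940000 blocks;
    # at an assumed 4 KiB per block that is 103424 MiB, matching the logged
    # "Base device capacity: 103424.00 MiB".
    print((0x19003a0 + 0x3fc60) * 4096 // (1 << 20))  # 103424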
00:40:36.094 [2024-10-15 02:09:44.889057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.889068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:40:36.094 [2024-10-15 02:09:44.889079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms
00:40:36.094 [2024-10-15 02:09:44.889089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.945896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.945963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:40:36.094 [2024-10-15 02:09:44.945983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.738 ms
00:40:36.094 [2024-10-15 02:09:44.945994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.946230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.946267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:40:36.094 [2024-10-15 02:09:44.946280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:40:36.094 [2024-10-15 02:09:44.946291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.987865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.987913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:40:36.094 [2024-10-15 02:09:44.987929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.543 ms
00:40:36.094 [2024-10-15 02:09:44.987941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.988035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.988052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:40:36.094 [2024-10-15 02:09:44.988065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:40:36.094 [2024-10-15 02:09:44.988081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.094 [2024-10-15 02:09:44.988828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.094 [2024-10-15 02:09:44.988852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:40:36.095 [2024-10-15 02:09:44.988866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms
00:40:36.095 [2024-10-15 02:09:44.988877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:44.989043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:44.989061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:40:36.095 [2024-10-15 02:09:44.989073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms
00:40:36.095 [2024-10-15 02:09:44.989084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:45.006934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:45.006970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:40:36.095 [2024-10-15 02:09:45.006985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.817 ms
00:40:36.095 [2024-10-15 02:09:45.007001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:45.021247] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:40:36.095 [2024-10-15 02:09:45.021285] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:40:36.095 [2024-10-15 02:09:45.021302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:45.021314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:40:36.095 [2024-10-15 02:09:45.021327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.177 ms
00:40:36.095 [2024-10-15 02:09:45.021338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:45.045084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:45.045121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:40:36.095 [2024-10-15 02:09:45.045143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.651 ms
00:40:36.095 [2024-10-15 02:09:45.045155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:45.057488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:45.057524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:40:36.095 [2024-10-15 02:09:45.057538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.242 ms
00:40:36.095 [2024-10-15 02:09:45.057549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:45.069579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:45.069614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:40:36.095 [2024-10-15 02:09:45.069629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.950 ms
00:40:36.095 [2024-10-15 02:09:45.069639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.095 [2024-10-15 02:09:45.070288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.095 [2024-10-15 02:09:45.070315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:40:36.095 [2024-10-15 02:09:45.070329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms
00:40:36.095 [2024-10-15 02:09:45.070339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.353 [2024-10-15 02:09:45.141537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.353 [2024-10-15 02:09:45.141615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:40:36.353 [2024-10-15 02:09:45.141636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.161 ms
00:40:36.353 [2024-10-15 02:09:45.141655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.353 [2024-10-15 02:09:45.151326] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:40:36.353 [2024-10-15 02:09:45.173532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.353 [2024-10-15 02:09:45.173583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:40:36.354 [2024-10-15 02:09:45.173601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.767 ms
00:40:36.354 [2024-10-15 02:09:45.173612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.173776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.354 [2024-10-15 02:09:45.173795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:40:36.354 [2024-10-15 02:09:45.173809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:40:36.354 [2024-10-15 02:09:45.173819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.173917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.354 [2024-10-15 02:09:45.173941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:40:36.354 [2024-10-15 02:09:45.173954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:40:36.354 [2024-10-15 02:09:45.173965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.174001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.354 [2024-10-15 02:09:45.174016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:40:36.354 [2024-10-15 02:09:45.174028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:40:36.354 [2024-10-15 02:09:45.174038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.174085] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:40:36.354 [2024-10-15 02:09:45.174099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.354 [2024-10-15 02:09:45.174115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:40:36.354 [2024-10-15 02:09:45.174126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:40:36.354 [2024-10-15 02:09:45.174137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.200044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.354 [2024-10-15 02:09:45.200086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:40:36.354 [2024-10-15 02:09:45.200102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms
00:40:36.354 [2024-10-15 02:09:45.200113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.200231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:36.354 [2024-10-15 02:09:45.200250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:40:36.354 [2024-10-15 02:09:45.200263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:40:36.354 [2024-10-15 02:09:45.200274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:36.354 [2024-10-15 02:09:45.201677] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:40:36.354 [2024-10-15 02:09:45.204895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.326 ms, result 0
00:40:36.354 [2024-10-15 02:09:45.205741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:40:36.354 [2024-10-15 02:09:45.219315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:40:37.288  [2024-10-15T02:09:47.234Z] Copying: 28/256 [MB] (28 MBps)
[2024-10-15T02:09:48.607Z] Copying: 54/256 [MB] (25 MBps)
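The copy itself behaves the way the steady-state rate suggests: 256 MB at roughly 25 MBps is about ten seconds of wall time, which lines up with the spread of the "Copying:" timestamps (02:09:47 through 02:09:55) on top of the faster first chunk. A trivial estimate:

    # Rough wall-time estimate for the spdk_dd copy from the progress lines.
    size_mb = 256
    rate_mbps = 25                # steady-state rate reported above
    print(size_mb / rate_mbps)    # ~10.2 s, consistent with the ~10 s span
                                  # of "Copying:" timestamps in this log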
[2024-10-15T02:09:49.541Z] Copying: 80/256 [MB] (25 MBps)
[2024-10-15T02:09:50.474Z] Copying: 106/256 [MB] (25 MBps)
[2024-10-15T02:09:51.408Z] Copying: 131/256 [MB] (25 MBps)
[2024-10-15T02:09:52.342Z] Copying: 156/256 [MB] (25 MBps)
[2024-10-15T02:09:53.311Z] Copying: 181/256 [MB] (25 MBps)
[2024-10-15T02:09:54.246Z] Copying: 207/256 [MB] (25 MBps)
[2024-10-15T02:09:55.181Z] Copying: 232/256 [MB] (25 MBps)
[2024-10-15T02:09:55.181Z] Copying: 256/256 [MB] (average 25 MBps)
[2024-10-15 02:09:55.150338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:40:46.169 [2024-10-15 02:09:55.161245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.169 [2024-10-15 02:09:55.161282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:40:46.169 [2024-10-15 02:09:55.161301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:40:46.169 [2024-10-15 02:09:55.161312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.169 [2024-10-15 02:09:55.161338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:40:46.169 [2024-10-15 02:09:55.164847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.169 [2024-10-15 02:09:55.165110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:40:46.169 [2024-10-15 02:09:55.165135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.490 ms
00:40:46.169 [2024-10-15 02:09:55.165147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.169 [2024-10-15 02:09:55.165450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.169 [2024-10-15 02:09:55.165493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:40:46.169 [2024-10-15 02:09:55.165506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms
00:40:46.169 [2024-10-15 02:09:55.165516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.169 [2024-10-15 02:09:55.168610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.169 [2024-10-15 02:09:55.168639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:40:46.169 [2024-10-15 02:09:55.168651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms
00:40:46.169 [2024-10-15 02:09:55.168662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.169 [2024-10-15 02:09:55.175047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.169 [2024-10-15 02:09:55.175200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:40:46.169 [2024-10-15 02:09:55.175236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms
00:40:46.169 [2024-10-15 02:09:55.175248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.200700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.200861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:40:46.429 [2024-10-15 02:09:55.200886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.370 ms
00:40:46.429 [2024-10-15 02:09:55.200897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.216559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.216597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:40:46.429 [2024-10-15 02:09:55.216612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.618 ms
00:40:46.429 [2024-10-15 02:09:55.216623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.216752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.216771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:40:46.429 [2024-10-15 02:09:55.216782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:40:46.429 [2024-10-15 02:09:55.216805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.241588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.241770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:40:46.429 [2024-10-15 02:09:55.241794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.762 ms
00:40:46.429 [2024-10-15 02:09:55.241806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.266270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.266306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:40:46.429 [2024-10-15 02:09:55.266320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms
00:40:46.429 [2024-10-15 02:09:55.266330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.290709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.290751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:40:46.429 [2024-10-15 02:09:55.290766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.340 ms
00:40:46.429 [2024-10-15 02:09:55.290776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.314259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.429 [2024-10-15 02:09:55.314437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:40:46.429 [2024-10-15 02:09:55.314461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.410 ms
00:40:46.429 [2024-10-15 02:09:55.314471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.429 [2024-10-15 02:09:55.314515] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:40:46.429 [2024-10-15 02:09:55.314548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.314990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:40:46.429 [2024-10-15 02:09:55.315213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:40:46.430 [2024-10-15 02:09:55.315659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:40:46.430 [2024-10-15 02:09:55.315670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265
00:40:46.430 [2024-10-15 02:09:55.315680] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:40:46.430 [2024-10-15 02:09:55.315690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:40:46.430 [2024-10-15 02:09:55.315708] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:40:46.430 [2024-10-15 02:09:55.315718] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:40:46.430 [2024-10-15 02:09:55.315728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:40:46.430 [2024-10-15 02:09:55.315738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:40:46.430 [2024-10-15 02:09:55.315748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:40:46.430 [2024-10-15 02:09:55.315756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:40:46.430 [2024-10-15 02:09:55.315765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:40:46.430 [2024-10-15 02:09:55.315775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.430 [2024-10-15 02:09:55.315785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:40:46.430 [2024-10-15 02:09:55.315796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms
00:40:46.430 [2024-10-15 02:09:55.315806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.430 [2024-10-15 02:09:55.330203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.430 [2024-10-15 02:09:55.330237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:40:46.430 [2024-10-15 02:09:55.330251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.356 ms
00:40:46.430 [2024-10-15 02:09:55.330262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.430 [2024-10-15 02:09:55.330734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:46.430 [2024-10-15 02:09:55.330752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:40:46.430 [2024-10-15 02:09:55.330764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms
00:40:46.430 [2024-10-15 02:09:55.330774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.430 [2024-10-15 02:09:55.366634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:46.430 [2024-10-15 02:09:55.366674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:40:46.430 [2024-10-15 02:09:55.366690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:46.430 [2024-10-15 02:09:55.366701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.430 [2024-10-15 02:09:55.366795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:46.430 [2024-10-15 02:09:55.366813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:40:46.430 [2024-10-15 02:09:55.366824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:46.430 [2024-10-15 02:09:55.366834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.430 [2024-10-15 02:09:55.366889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:46.430 [2024-10-15 02:09:55.366917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:40:46.430 [2024-10-15 02:09:55.366929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:40:46.430 [2024-10-15 02:09:55.366940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:46.430 [2024-10-15 02:09:55.366965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:40:46.430 [2024-10-15 02:09:55.366977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:40:46.430 [2024-10-15
02:09:55.366988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.430 [2024-10-15 02:09:55.366998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.455530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.455802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:46.689 [2024-10-15 02:09:55.455829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.455841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:46.689 [2024-10-15 02:09:55.528089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:46.689 [2024-10-15 02:09:55.528232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:46.689 [2024-10-15 02:09:55.528304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:46.689 [2024-10-15 02:09:55.528495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:46.689 [2024-10-15 02:09:55.528585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:46.689 [2024-10-15 02:09:55.528679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.528763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:46.689 [2024-10-15 02:09:55.528779] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:46.689 [2024-10-15 02:09:55.528790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:46.689 [2024-10-15 02:09:55.528800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:46.689 [2024-10-15 02:09:55.529046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.770 ms, result 0 00:40:46.689 [2024-10-15 02:09:55.530677] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019efcce0 was disconnected and freed. delete nvme_qpair. 00:40:46.689 [2024-10-15 02:09:55.532082] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001992aca0 was disconnected and freed. delete nvme_qpair. 00:40:46.689 [2024-10-15 02:09:55.536600] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:40:47.624 00:40:47.624 00:40:47.624 02:09:56 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:40:47.624 02:09:56 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:40:48.191 02:09:57 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:48.191 [2024-10-15 02:09:57.168042] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:40:48.191 [2024-10-15 02:09:57.168754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76540 ] 00:40:48.450 [2024-10-15 02:09:57.343174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.708 [2024-10-15 02:09:57.568379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.966 [2024-10-15 02:09:57.905400] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:48.966 [2024-10-15 02:09:57.905510] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:49.225 [2024-10-15 02:09:58.053947] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 
00:40:49.225 [2024-10-15 02:09:58.067670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.067715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:49.225 [2024-10-15 02:09:58.067735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:40:49.225 [2024-10-15 02:09:58.067746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.070829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.070869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:49.225 [2024-10-15 02:09:58.070884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:40:49.225 [2024-10-15 02:09:58.070898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.071025] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:49.225 [2024-10-15 02:09:58.071884] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:49.225 [2024-10-15 02:09:58.071922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.071955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:49.225 [2024-10-15 02:09:58.071966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:40:49.225 [2024-10-15 02:09:58.071976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.074544] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:49.225 [2024-10-15 02:09:58.088754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.088792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:49.225 [2024-10-15 02:09:58.088808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.211 ms 00:40:49.225 [2024-10-15 02:09:58.088818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.088921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.088943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:49.225 [2024-10-15 02:09:58.088956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:40:49.225 [2024-10-15 02:09:58.088966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.100557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.100600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:49.225 [2024-10-15 02:09:58.100628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.544 ms 00:40:49.225 [2024-10-15 02:09:58.100638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.100780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.100799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:49.225 [2024-10-15 02:09:58.100812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:40:49.225 [2024-10-15 02:09:58.100822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.100858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.100871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:49.225 [2024-10-15 02:09:58.100882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:49.225 [2024-10-15 02:09:58.100892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.100923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:49.225 [2024-10-15 02:09:58.105663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.105698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:49.225 [2024-10-15 02:09:58.105711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:40:49.225 [2024-10-15 02:09:58.105721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.105783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.105800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:49.225 [2024-10-15 02:09:58.105811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:49.225 [2024-10-15 02:09:58.105821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.105846] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:49.225 [2024-10-15 02:09:58.105874] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:49.225 [2024-10-15 02:09:58.105915] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:49.225 [2024-10-15 02:09:58.105938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:49.225 [2024-10-15 02:09:58.106033] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:49.225 [2024-10-15 02:09:58.106047] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:49.225 [2024-10-15 02:09:58.106061] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:49.225 [2024-10-15 02:09:58.106074] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106086] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:49.225 [2024-10-15 02:09:58.106107] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:49.225 [2024-10-15 02:09:58.106118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:49.225 [2024-10-15 02:09:58.106129] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:49.225 [2024-10-15 02:09:58.106140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.106155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize layout 00:40:49.225 [2024-10-15 02:09:58.106165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:40:49.225 [2024-10-15 02:09:58.106175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.106257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.225 [2024-10-15 02:09:58.106271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:49.225 [2024-10-15 02:09:58.106282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:40:49.225 [2024-10-15 02:09:58.106292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.225 [2024-10-15 02:09:58.106387] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:49.225 [2024-10-15 02:09:58.106420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:49.225 [2024-10-15 02:09:58.106441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:49.225 [2024-10-15 02:09:58.106471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:49.225 [2024-10-15 02:09:58.106499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:49.225 [2024-10-15 02:09:58.106540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:49.225 [2024-10-15 02:09:58.106551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:49.225 [2024-10-15 02:09:58.106563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:49.225 [2024-10-15 02:09:58.106573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:49.225 [2024-10-15 02:09:58.106583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:49.225 [2024-10-15 02:09:58.106592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:49.225 [2024-10-15 02:09:58.106610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:49.225 [2024-10-15 02:09:58.106639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:49.225 [2024-10-15 02:09:58.106666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106684] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:49.225 [2024-10-15 02:09:58.106693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:49.225 [2024-10-15 02:09:58.106720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:49.225 [2024-10-15 02:09:58.106747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:49.225 [2024-10-15 02:09:58.106766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:49.225 [2024-10-15 02:09:58.106775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:49.225 [2024-10-15 02:09:58.106784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:49.225 [2024-10-15 02:09:58.106793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:49.225 [2024-10-15 02:09:58.106802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:49.225 [2024-10-15 02:09:58.106811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:49.225 [2024-10-15 02:09:58.106829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:49.225 [2024-10-15 02:09:58.106840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:49.225 [2024-10-15 02:09:58.106862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:49.225 [2024-10-15 02:09:58.106874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:49.225 [2024-10-15 02:09:58.106884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:49.225 [2024-10-15 02:09:58.106895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:49.225 [2024-10-15 02:09:58.106905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:49.225 [2024-10-15 02:09:58.106914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:49.225 [2024-10-15 02:09:58.106924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:49.225 [2024-10-15 02:09:58.106933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:49.225 [2024-10-15 02:09:58.106943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:49.226 [2024-10-15 02:09:58.106954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:49.226 [2024-10-15 02:09:58.106973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.106985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:49.226 [2024-10-15 02:09:58.106995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:49.226 [2024-10-15 02:09:58.107005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:49.226 [2024-10-15 02:09:58.107015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:49.226 [2024-10-15 02:09:58.107026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:49.226 [2024-10-15 02:09:58.107035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:49.226 [2024-10-15 02:09:58.107045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:49.226 [2024-10-15 02:09:58.107056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:49.226 [2024-10-15 02:09:58.107066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:49.226 [2024-10-15 02:09:58.107077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.107087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.107097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.107107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.107117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:49.226 [2024-10-15 02:09:58.107128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:49.226 [2024-10-15 02:09:58.107140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.107156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:49.226 [2024-10-15 02:09:58.107166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:49.226 [2024-10-15 02:09:58.107176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:49.226 [2024-10-15 02:09:58.107187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:49.226 [2024-10-15 02:09:58.107198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.107211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Layout upgrade 00:40:49.226 [2024-10-15 02:09:58.107222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:40:49.226 [2024-10-15 02:09:58.107232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.155366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.155683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:49.226 [2024-10-15 02:09:58.155804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.060 ms 00:40:49.226 [2024-10-15 02:09:58.155853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.156110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.156171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:49.226 [2024-10-15 02:09:58.156315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:40:49.226 [2024-10-15 02:09:58.156470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.198198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.198415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:49.226 [2024-10-15 02:09:58.198570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.651 ms 00:40:49.226 [2024-10-15 02:09:58.198637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.198825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.198913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:49.226 [2024-10-15 02:09:58.198967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:49.226 [2024-10-15 02:09:58.199105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.199928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.200066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:49.226 [2024-10-15 02:09:58.200191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:40:49.226 [2024-10-15 02:09:58.200236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.200472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.200531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:49.226 [2024-10-15 02:09:58.200648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:40:49.226 [2024-10-15 02:09:58.200836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.218953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.219103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:49.226 [2024-10-15 02:09:58.219234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.036 ms 00:40:49.226 [2024-10-15 02:09:58.219288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.226 [2024-10-15 02:09:58.233646] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty 
chunks = 3 00:40:49.226 [2024-10-15 02:09:58.233821] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:49.226 [2024-10-15 02:09:58.233945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.226 [2024-10-15 02:09:58.233985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:49.226 [2024-10-15 02:09:58.234019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.468 ms 00:40:49.226 [2024-10-15 02:09:58.234115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.257712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.257865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:49.484 [2024-10-15 02:09:58.257978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.476 ms 00:40:49.484 [2024-10-15 02:09:58.258021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.270443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.270600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:49.484 [2024-10-15 02:09:58.270702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:40:49.484 [2024-10-15 02:09:58.270745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.282754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.282905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:49.484 [2024-10-15 02:09:58.283007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.900 ms 00:40:49.484 [2024-10-15 02:09:58.283050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.283757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.283895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:49.484 [2024-10-15 02:09:58.284018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:40:49.484 [2024-10-15 02:09:58.284063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.354974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.355294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:49.484 [2024-10-15 02:09:58.355415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.839 ms 00:40:49.484 [2024-10-15 02:09:58.355496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.365311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:49.484 [2024-10-15 02:09:58.387761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.387995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:49.484 [2024-10-15 02:09:58.388027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.121 ms 00:40:49.484 [2024-10-15 02:09:58.388039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.388182] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.388201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:49.484 [2024-10-15 02:09:58.388215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:49.484 [2024-10-15 02:09:58.388235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.388337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.388352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:49.484 [2024-10-15 02:09:58.388363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:40:49.484 [2024-10-15 02:09:58.388373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.388407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.388420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:49.484 [2024-10-15 02:09:58.388452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:49.484 [2024-10-15 02:09:58.388466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.388518] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:49.484 [2024-10-15 02:09:58.388538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.388549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:49.484 [2024-10-15 02:09:58.388560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:40:49.484 [2024-10-15 02:09:58.388570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.414349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.414524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:49.484 [2024-10-15 02:09:58.414582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.753 ms 00:40:49.484 [2024-10-15 02:09:58.414602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.414720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.484 [2024-10-15 02:09:58.414739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:49.484 [2024-10-15 02:09:58.414752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:40:49.484 [2024-10-15 02:09:58.414763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.484 [2024-10-15 02:09:58.416495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:49.484 [2024-10-15 02:09:58.419797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.183 ms, result 0 00:40:49.484 [2024-10-15 02:09:58.420757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:49.484 [2024-10-15 02:09:58.434397] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:49.743  [2024-10-15T02:09:58.755Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-10-15 02:09:58.608639] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:49.743 [2024-10-15 02:09:58.617904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.617941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:49.743 [2024-10-15 02:09:58.617955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:49.743 [2024-10-15 02:09:58.617965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.617989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:49.743 [2024-10-15 02:09:58.621383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.621422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:49.743 [2024-10-15 02:09:58.621435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:40:49.743 [2024-10-15 02:09:58.621444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.623346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.623509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:49.743 [2024-10-15 02:09:58.623534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:40:49.743 [2024-10-15 02:09:58.623545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.626771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.626912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:49.743 [2024-10-15 02:09:58.626951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:40:49.743 [2024-10-15 02:09:58.626962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.632790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.632827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:49.743 [2024-10-15 02:09:58.632840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.741 ms 00:40:49.743 [2024-10-15 02:09:58.632848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.656733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.656773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:49.743 [2024-10-15 02:09:58.656787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.824 ms 00:40:49.743 [2024-10-15 02:09:58.656796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.672167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.672206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:49.743 [2024-10-15 02:09:58.672220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.333 ms 00:40:49.743 [2024-10-15 02:09:58.672229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.672352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.672369] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:49.743 [2024-10-15 02:09:58.672387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:49.743 [2024-10-15 02:09:58.672397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.697161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.697199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:49.743 [2024-10-15 02:09:58.697213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.723 ms 00:40:49.743 [2024-10-15 02:09:58.697222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.722719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.722909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:49.743 [2024-10-15 02:09:58.722933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.458 ms 00:40:49.743 [2024-10-15 02:09:58.722943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.743 [2024-10-15 02:09:58.748099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:49.743 [2024-10-15 02:09:58.748137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:49.743 [2024-10-15 02:09:58.748168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.098 ms 00:40:49.743 [2024-10-15 02:09:58.748177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.003 [2024-10-15 02:09:58.772152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.003 [2024-10-15 02:09:58.772191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:50.003 [2024-10-15 02:09:58.772220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.893 ms 00:40:50.003 [2024-10-15 02:09:58.772229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.003 [2024-10-15 02:09:58.772269] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:50.003 [2024-10-15 02:09:58.772290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772383] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 
[2024-10-15 02:09:58.772665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:40:50.003 [2024-10-15 02:09:58.772926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.772999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:50.003 [2024-10-15 02:09:58.773145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:50.004 [2024-10-15 02:09:58.773394] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:50.004 [2024-10-15 02:09:58.773414] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265 00:40:50.004 [2024-10-15 02:09:58.773427] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:50.004 [2024-10-15 02:09:58.773443] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:50.004 [2024-10-15 02:09:58.773453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:50.004 [2024-10-15 02:09:58.773463] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:50.004 [2024-10-15 02:09:58.773472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:50.004 [2024-10-15 02:09:58.773483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:50.004 [2024-10-15 02:09:58.773493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:50.004 [2024-10-15 02:09:58.773501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:50.004 [2024-10-15 02:09:58.773510] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:40:50.004 [2024-10-15 02:09:58.773520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.004 [2024-10-15 02:09:58.773530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:50.004 [2024-10-15 02:09:58.773540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:40:50.004 [2024-10-15 02:09:58.773550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.788141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.004 [2024-10-15 02:09:58.788179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:50.004 [2024-10-15 02:09:58.788209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.554 ms 00:40:50.004 [2024-10-15 02:09:58.788220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.788699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.004 [2024-10-15 02:09:58.788720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:50.004 [2024-10-15 02:09:58.788731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:40:50.004 [2024-10-15 02:09:58.788741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.825126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.825173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:50.004 [2024-10-15 02:09:58.825204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.825215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.825330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.825347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:50.004 [2024-10-15 02:09:58.825358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.825368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.825452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.825470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:50.004 [2024-10-15 02:09:58.825483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.825493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.825528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.825541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:50.004 [2024-10-15 02:09:58.825552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.825562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.915240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.915317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:50.004 [2024-10-15 02:09:58.915336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:40:50.004 [2024-10-15 02:09:58.915347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.987392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.987469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:50.004 [2024-10-15 02:09:58.987489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.987500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.987612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.987637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:50.004 [2024-10-15 02:09:58.987648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.987660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.987697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.987710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:50.004 [2024-10-15 02:09:58.987722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.987731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.987849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.987871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:50.004 [2024-10-15 02:09:58.987883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.987893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.987944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.987967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:50.004 [2024-10-15 02:09:58.987979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.987989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.988043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.988057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:50.004 [2024-10-15 02:09:58.988074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.988084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.988144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:50.004 [2024-10-15 02:09:58.988159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:50.004 [2024-10-15 02:09:58.988171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:50.004 [2024-10-15 02:09:58.988181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.004 [2024-10-15 02:09:58.988370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.436 ms, result 0 00:40:50.004 [2024-10-15 02:09:58.989994] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: 
[0000:00:10.0] qpair 0x200019efcce0 was disconnected and freed. delete nvme_qpair. 00:40:50.004 [2024-10-15 02:09:58.991314] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001992aca0 was disconnected and freed. delete nvme_qpair. 00:40:50.004 [2024-10-15 02:09:58.995652] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:40:51.380 00:40:51.380 00:40:51.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.380 02:10:00 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76575 00:40:51.380 02:10:00 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:40:51.380 02:10:00 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76575 00:40:51.380 02:10:00 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76575 ']' 00:40:51.380 02:10:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.380 02:10:00 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:51.380 02:10:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:51.380 02:10:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:51.380 02:10:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:40:51.380 [2024-10-15 02:10:00.159813] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:40:51.380 [2024-10-15 02:10:00.160210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76575 ] 00:40:51.380 [2024-10-15 02:10:00.334070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.639 [2024-10-15 02:10:00.539638] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.574 02:10:01 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:52.574 02:10:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:40:52.574 02:10:01 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:40:52.833 [2024-10-15 02:10:01.616415] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:52.833 [2024-10-15 02:10:01.616495] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:52.833 [2024-10-15 02:10:01.789022] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 
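The trace above shows ftl/trim.sh launching spdk_tgt with the ftl_init log component enabled and then blocking in waitforlisten until the target answers on /var/tmp/spdk.sock, before replaying the saved bdev configuration with rpc.py load_config. A minimal stand-alone sketch of that startup handshake, assuming an SPDK checkout at $SPDK_DIR; the poll loop is only a stand-in for the harness's waitforlisten() helper from common/autotest_common.sh:

#!/usr/bin/env bash
# Start the SPDK target with the ftl_init log component, as trim.sh does above.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Stand-in for waitforlisten(): poll the UNIX domain socket until the target
# answers an RPC, bailing out early if the process dies during startup.
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; do
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
    sleep 0.1
done
echo "spdk_tgt (pid $svcpid) is listening on $RPC_SOCK"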
00:40:52.833 [2024-10-15 02:10:01.802652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.802707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:52.833 [2024-10-15 02:10:01.802729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:52.833 [2024-10-15 02:10:01.802755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.806432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.806473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:52.833 [2024-10-15 02:10:01.806489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.649 ms 00:40:52.833 [2024-10-15 02:10:01.806503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.806661] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:52.833 [2024-10-15 02:10:01.807530] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:52.833 [2024-10-15 02:10:01.807584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.807601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:52.833 [2024-10-15 02:10:01.807618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:40:52.833 [2024-10-15 02:10:01.807636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.810224] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:52.833 [2024-10-15 02:10:01.824549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.824752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:52.833 [2024-10-15 02:10:01.824793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.320 ms 00:40:52.833 [2024-10-15 02:10:01.824809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.824938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.824958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:52.833 [2024-10-15 02:10:01.824974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:40:52.833 [2024-10-15 02:10:01.824986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.836717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.836759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:52.833 [2024-10-15 02:10:01.836782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.668 ms 00:40:52.833 [2024-10-15 02:10:01.836793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.836996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.837016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:52.833 [2024-10-15 02:10:01.837034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:40:52.833 [2024-10-15 02:10:01.837045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.837091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.837106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:52.833 [2024-10-15 02:10:01.837122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:52.833 [2024-10-15 02:10:01.837133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.837183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:52.833 [2024-10-15 02:10:01.842013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.842054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:52.833 [2024-10-15 02:10:01.842075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:40:52.833 [2024-10-15 02:10:01.842096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.842157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.842181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:52.833 [2024-10-15 02:10:01.842195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:52.833 [2024-10-15 02:10:01.842211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.842241] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:52.833 [2024-10-15 02:10:01.842277] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:52.833 [2024-10-15 02:10:01.842335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:52.833 [2024-10-15 02:10:01.842366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:52.833 [2024-10-15 02:10:01.842483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:52.833 [2024-10-15 02:10:01.842509] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:52.833 [2024-10-15 02:10:01.842526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:52.833 [2024-10-15 02:10:01.842560] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:52.833 [2024-10-15 02:10:01.842573] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:52.833 [2024-10-15 02:10:01.842590] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:52.833 [2024-10-15 02:10:01.842602] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:52.833 [2024-10-15 02:10:01.842631] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:52.833 [2024-10-15 02:10:01.842642] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:52.833 [2024-10-15 02:10:01.842659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.842670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize layout 00:40:52.833 [2024-10-15 02:10:01.842687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:40:52.833 [2024-10-15 02:10:01.842699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:52.833 [2024-10-15 02:10:01.842788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:52.833 [2024-10-15 02:10:01.842802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:52.833 [2024-10-15 02:10:01.842818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:40:52.833 [2024-10-15 02:10:01.842829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.093 [2024-10-15 02:10:01.842941] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:53.093 [2024-10-15 02:10:01.842964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:53.093 [2024-10-15 02:10:01.842982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:53.093 [2024-10-15 02:10:01.842994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:53.093 [2024-10-15 02:10:01.843021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:53.093 [2024-10-15 02:10:01.843069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:53.093 [2024-10-15 02:10:01.843096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:53.093 [2024-10-15 02:10:01.843107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:53.093 [2024-10-15 02:10:01.843122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:53.093 [2024-10-15 02:10:01.843133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:53.093 [2024-10-15 02:10:01.843148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:53.093 [2024-10-15 02:10:01.843159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:53.093 [2024-10-15 02:10:01.843202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:53.093 [2024-10-15 02:10:01.843246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:53.093 [2024-10-15 02:10:01.843288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843314] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:53.093 [2024-10-15 02:10:01.843330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:53.093 [2024-10-15 02:10:01.843366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:53.093 [2024-10-15 02:10:01.843421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:53.093 [2024-10-15 02:10:01.843452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:53.093 [2024-10-15 02:10:01.843463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:53.093 [2024-10-15 02:10:01.843479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:53.093 [2024-10-15 02:10:01.843491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:53.093 [2024-10-15 02:10:01.843506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:53.093 [2024-10-15 02:10:01.843516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:53.093 [2024-10-15 02:10:01.843539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:53.093 [2024-10-15 02:10:01.843551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843561] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:53.093 [2024-10-15 02:10:01.843574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:53.093 [2024-10-15 02:10:01.843585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:53.093 [2024-10-15 02:10:01.843609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:53.093 [2024-10-15 02:10:01.843623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:53.093 [2024-10-15 02:10:01.843634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:53.093 [2024-10-15 02:10:01.843646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:53.093 [2024-10-15 02:10:01.843656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:53.093 [2024-10-15 02:10:01.843668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:53.093 [2024-10-15 02:10:01.843680] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:53.093 [2024-10-15 02:10:01.843698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:53.093 [2024-10-15 02:10:01.843713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:53.093 [2024-10-15 02:10:01.843727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:53.093 [2024-10-15 02:10:01.843738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:53.093 [2024-10-15 02:10:01.843752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:53.093 [2024-10-15 02:10:01.843763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:53.093 [2024-10-15 02:10:01.843777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:53.093 [2024-10-15 02:10:01.843787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:53.093 [2024-10-15 02:10:01.843800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:53.093 [2024-10-15 02:10:01.843811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:53.093 [2024-10-15 02:10:01.843828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:53.094 [2024-10-15 02:10:01.843841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:53.094 [2024-10-15 02:10:01.843856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:53.094 [2024-10-15 02:10:01.843869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:53.094 [2024-10-15 02:10:01.843885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:53.094 [2024-10-15 02:10:01.843897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:53.094 [2024-10-15 02:10:01.843918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:53.094 [2024-10-15 02:10:01.843931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:53.094 [2024-10-15 02:10:01.843947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:53.094 [2024-10-15 02:10:01.843960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:53.094 [2024-10-15 02:10:01.843976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:53.094 [2024-10-15 02:10:01.843989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.844006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Layout upgrade 00:40:53.094 [2024-10-15 02:10:01.844018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:40:53.094 [2024-10-15 02:10:01.844034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.886156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.886478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:53.094 [2024-10-15 02:10:01.886618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.030 ms 00:40:53.094 [2024-10-15 02:10:01.886678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.887115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.887279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:53.094 [2024-10-15 02:10:01.887391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:40:53.094 [2024-10-15 02:10:01.887547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.940897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.941110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:53.094 [2024-10-15 02:10:01.941225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.269 ms 00:40:53.094 [2024-10-15 02:10:01.941283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.941660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.941711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:53.094 [2024-10-15 02:10:01.941728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:53.094 [2024-10-15 02:10:01.941746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.942551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.942601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:53.094 [2024-10-15 02:10:01.942633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:40:53.094 [2024-10-15 02:10:01.942653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.942848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.942873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:53.094 [2024-10-15 02:10:01.942892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:40:53.094 [2024-10-15 02:10:01.942913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.964643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.964691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:53.094 [2024-10-15 02:10:01.964708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.699 ms 00:40:53.094 [2024-10-15 02:10:01.964723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:01.979293] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty 
chunks = 2 00:40:53.094 [2024-10-15 02:10:01.979473] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:53.094 [2024-10-15 02:10:01.979499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:01.979518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:53.094 [2024-10-15 02:10:01.979533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.629 ms 00:40:53.094 [2024-10-15 02:10:01.979550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:02.003358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:02.003535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:53.094 [2024-10-15 02:10:02.003576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.721 ms 00:40:53.094 [2024-10-15 02:10:02.003596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:02.016372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:02.016438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:53.094 [2024-10-15 02:10:02.016455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.677 ms 00:40:53.094 [2024-10-15 02:10:02.016472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:02.028834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:02.028882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:53.094 [2024-10-15 02:10:02.028898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:40:53.094 [2024-10-15 02:10:02.028914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:02.029625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:02.029660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:53.094 [2024-10-15 02:10:02.029679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:40:53.094 [2024-10-15 02:10:02.029697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.094 [2024-10-15 02:10:02.102500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.094 [2024-10-15 02:10:02.102647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:53.094 [2024-10-15 02:10:02.102673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.759 ms 00:40:53.094 [2024-10-15 02:10:02.102692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.352 [2024-10-15 02:10:02.112553] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:53.352 [2024-10-15 02:10:02.135536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.352 [2024-10-15 02:10:02.135606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:53.352 [2024-10-15 02:10:02.135631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.717 ms 00:40:53.352 [2024-10-15 02:10:02.135644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.352 [2024-10-15 02:10:02.135784] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.352 [2024-10-15 02:10:02.135802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:53.352 [2024-10-15 02:10:02.135824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:53.352 [2024-10-15 02:10:02.135835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.352 [2024-10-15 02:10:02.135928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.352 [2024-10-15 02:10:02.135943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:53.352 [2024-10-15 02:10:02.135959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:40:53.352 [2024-10-15 02:10:02.135970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.352 [2024-10-15 02:10:02.136008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.352 [2024-10-15 02:10:02.136026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:53.352 [2024-10-15 02:10:02.136040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:53.352 [2024-10-15 02:10:02.136069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.352 [2024-10-15 02:10:02.136132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:53.353 [2024-10-15 02:10:02.136148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.353 [2024-10-15 02:10:02.136164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:53.353 [2024-10-15 02:10:02.136177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:40:53.353 [2024-10-15 02:10:02.136193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.353 [2024-10-15 02:10:02.162702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.353 [2024-10-15 02:10:02.162755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:53.353 [2024-10-15 02:10:02.162779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.476 ms 00:40:53.353 [2024-10-15 02:10:02.162796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.353 [2024-10-15 02:10:02.162930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.353 [2024-10-15 02:10:02.162958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:53.353 [2024-10-15 02:10:02.162971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:40:53.353 [2024-10-15 02:10:02.162987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.353 [2024-10-15 02:10:02.164477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:53.353 [2024-10-15 02:10:02.167835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.361 ms, result 0 00:40:53.353 [2024-10-15 02:10:02.169057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:53.353 Some configs were skipped because the RPC state that can call them passed over. 
00:40:53.353 02:10:02 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:40:53.611 [2024-10-15 02:10:02.472487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.611 [2024-10-15 02:10:02.472761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:53.611 [2024-10-15 02:10:02.472900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:40:53.611 [2024-10-15 02:10:02.473010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.611 [2024-10-15 02:10:02.473111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.145 ms, result 0 00:40:53.611 true 00:40:53.611 02:10:02 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:40:53.870 [2024-10-15 02:10:02.752286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:53.870 [2024-10-15 02:10:02.752517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:53.870 [2024-10-15 02:10:02.752657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:40:53.870 [2024-10-15 02:10:02.752711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:53.870 [2024-10-15 02:10:02.752866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.630 ms, result 0 00:40:53.870 true 00:40:53.870 02:10:02 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76575 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76575 ']' 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76575 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76575 00:40:53.870 killing process with pid 76575 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76575' 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76575 00:40:53.870 02:10:02 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76575 00:40:54.806 [2024-10-15 02:10:03.714996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.715075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:54.806 [2024-10-15 02:10:03.715099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:54.806 [2024-10-15 02:10:03.715112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.715145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:54.806 [2024-10-15 02:10:03.718564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.718601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:54.806 [2024-10-15 02:10:03.718616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.398 ms 00:40:54.806 [2024-10-15 02:10:03.718628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.718915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.718939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:54.806 [2024-10-15 02:10:03.718952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:40:54.806 [2024-10-15 02:10:03.718964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.722342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.722570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:54.806 [2024-10-15 02:10:03.722614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.358 ms 00:40:54.806 [2024-10-15 02:10:03.722629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.728463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.728500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:54.806 [2024-10-15 02:10:03.728517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:40:54.806 [2024-10-15 02:10:03.728529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.738657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.738701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:54.806 [2024-10-15 02:10:03.738717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.076 ms 00:40:54.806 [2024-10-15 02:10:03.738729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.747556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.747615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:54.806 [2024-10-15 02:10:03.747641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.786 ms 00:40:54.806 [2024-10-15 02:10:03.747654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.747787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.747821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:54.806 [2024-10-15 02:10:03.747835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:40:54.806 [2024-10-15 02:10:03.747874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.758807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.758868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:54.806 [2024-10-15 02:10:03.758884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.911 ms 00:40:54.806 [2024-10-15 02:10:03.758902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.769144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.769193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:54.806 [2024-10-15 
02:10:03.769209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.203 ms 00:40:54.806 [2024-10-15 02:10:03.769225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.778975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.779171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:54.806 [2024-10-15 02:10:03.779196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.711 ms 00:40:54.806 [2024-10-15 02:10:03.779214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.788992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:54.806 [2024-10-15 02:10:03.789036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:54.806 [2024-10-15 02:10:03.789052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.700 ms 00:40:54.806 [2024-10-15 02:10:03.789068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:54.806 [2024-10-15 02:10:03.789107] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:54.806 [2024-10-15 02:10:03.789131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:54.806 [2024-10-15 02:10:03.789230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:54.807 [2024-10-15 02:10:03.789327] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
... [Bands 18-100 elided: each reads 0 / 261120 wr_cnt: 0 state: free] ...
00:40:54.807 [2024-10-15 02:10:03.790627] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:40:54.808 [2024-10-15 02:10:03.790640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265
00:40:54.808 [2024-10-15 02:10:03.790658] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:40:54.808 [2024-10-15 02:10:03.790670] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:40:54.808 [2024-10-15 02:10:03.790686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:40:54.808 [2024-10-15 02:10:03.790711] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:40:54.808 [2024-10-15 02:10:03.790736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:40:54.808 [2024-10-15 02:10:03.790749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:40:54.808 [2024-10-15 02:10:03.790765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:40:54.808 [2024-10-15 02:10:03.790775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:40:54.808 [2024-10-15 02:10:03.790792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:40:54.808 [2024-10-15 02:10:03.790804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:54.808 [2024-10-15 02:10:03.790820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:40:54.808 [2024-10-15 02:10:03.790834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms
00:40:54.808 [2024-10-15 02:10:03.790866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:54.808 [2024-10-15 02:10:03.805258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:40:54.808 [2024-10-15 02:10:03.805478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:40:54.808 [2024-10-15 02:10:03.805512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.353 ms
00:40:54.808 [2024-10-15 02:10:03.805531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:40:54.808 [2024-10-15 02:10:03.806048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:40:54.808 [2024-10-15 02:10:03.806080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:54.808 [2024-10-15 02:10:03.806095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:40:54.808 [2024-10-15 02:10:03.806111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:03.852510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:03.852575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:55.067 [2024-10-15 02:10:03.852591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:03.852606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:03.852743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:03.852764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:55.067 [2024-10-15 02:10:03.852776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:03.852789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:03.852845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:03.852870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:55.067 [2024-10-15 02:10:03.852882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:03.852898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:03.852922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:03.852943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:55.067 [2024-10-15 02:10:03.852957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:03.852972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:03.941636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:03.941714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:55.067 [2024-10-15 02:10:03.941740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:03.941757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.013861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:55.067 [2024-10-15 02:10:04.014124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.014145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.014289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:55.067 [2024-10-15 02:10:04.014337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.014354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
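A note on the statistics dump above: the "WAF: inf" line follows from the two counters printed next to it. With total writes: 960 (presumably all FTL metadata writes, since no user I/O had completed at that point) and user writes: 0, a write-amplification ratio of total over user writes divides by zero, which the logger reports as inf. The total-over-user definition is my reading, not stated in the log; a hedged sketch of recomputing it from a saved copy of this console output (the file name build.log is hypothetical):

  # recompute WAF from the ftl_dev_dump_stats lines above (GNU grep);
  # a zero user-write count is reported as inf, matching the log
  total=$(grep -m1 -oP 'total writes: \K[0-9]+' build.log)
  user=$(grep -m1 -oP 'user writes: \K[0-9]+' build.log)
  if [ "$user" -eq 0 ]; then
    echo 'WAF: inf'
  else
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t/u }'
  fi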
00:40:55.067 [2024-10-15 02:10:04.014437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:55.067 [2024-10-15 02:10:04.014474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.014490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.014629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:55.067 [2024-10-15 02:10:04.014671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.014688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.014745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:55.067 [2024-10-15 02:10:04.014783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.014798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.014855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:55.067 [2024-10-15 02:10:04.014887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.014899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.014966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:55.067 [2024-10-15 02:10:04.014983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:55.067 [2024-10-15 02:10:04.014995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:55.067 [2024-10-15 02:10:04.015008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:55.067 [2024-10-15 02:10:04.015188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.162 ms, result 0 00:40:55.067 [2024-10-15 02:10:04.016840] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20000d8feb60 was disconnected and freed. delete nvme_qpair. 00:40:55.067 [2024-10-15 02:10:04.018155] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001c438ca0 was disconnected and freed. delete nvme_qpair. 00:40:55.067 [2024-10-15 02:10:04.022697] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair. 00:40:56.444 02:10:05 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:56.444 [2024-10-15 02:10:05.127804] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
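The harness line above contains everything needed to replay this copy step by hand: as I read spdk_dd's options, --ib names the input bdev, --of the output file, --count the number of blocks to copy, and --json the bdev configuration to load. A minimal sketch using only the flags and paths that appear in the log:

  # replay the dd step: copy 65536 blocks from FTL bdev ftl0 to a flat file,
  # loading the bdev configuration from the JSON the test generated
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

At a 4 KiB FTL block size this is 65536 x 4 KiB = 256 MiB, which matches the "Copying: 256/256 [MB]" progress printed further down.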
00:40:56.444 [2024-10-15 02:10:05.127991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76640 ] 00:40:56.444 [2024-10-15 02:10:05.302308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.703 [2024-10-15 02:10:05.513903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.961 [2024-10-15 02:10:05.851176] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:56.961 [2024-10-15 02:10:05.851279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:57.221 [2024-10-15 02:10:05.999874] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:40:57.221 [2024-10-15 02:10:06.013560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.221 [2024-10-15 02:10:06.013604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:57.221 [2024-10-15 02:10:06.013624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:57.221 [2024-10-15 02:10:06.013636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.221 [2024-10-15 02:10:06.016812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.221 [2024-10-15 02:10:06.017007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:57.221 [2024-10-15 02:10:06.017033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:40:57.221 [2024-10-15 02:10:06.017053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.221 [2024-10-15 02:10:06.017229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:57.221 [2024-10-15 02:10:06.018070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:57.221 [2024-10-15 02:10:06.018111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.221 [2024-10-15 02:10:06.018160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:57.221 [2024-10-15 02:10:06.018173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:40:57.221 [2024-10-15 02:10:06.018183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.221 [2024-10-15 02:10:06.020582] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:57.221 [2024-10-15 02:10:06.035192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.221 [2024-10-15 02:10:06.035378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:57.221 [2024-10-15 02:10:06.035406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.612 ms 00:40:57.221 [2024-10-15 02:10:06.035435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.035553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.035577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:57.222 [2024-10-15 02:10:06.035590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:40:57.222 [2024-10-15 
02:10:06.035609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.047175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.047216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:57.222 [2024-10-15 02:10:06.047231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.514 ms 00:40:57.222 [2024-10-15 02:10:06.047241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.047380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.047400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:57.222 [2024-10-15 02:10:06.047434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:40:57.222 [2024-10-15 02:10:06.047445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.047483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.047497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:57.222 [2024-10-15 02:10:06.047510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:57.222 [2024-10-15 02:10:06.047520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.047551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:57.222 [2024-10-15 02:10:06.052298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.052332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:57.222 [2024-10-15 02:10:06.052346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.756 ms 00:40:57.222 [2024-10-15 02:10:06.052357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.052428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.052445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:57.222 [2024-10-15 02:10:06.052457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:57.222 [2024-10-15 02:10:06.052468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.052498] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:57.222 [2024-10-15 02:10:06.052527] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:57.222 [2024-10-15 02:10:06.052566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:57.222 [2024-10-15 02:10:06.052588] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:57.222 [2024-10-15 02:10:06.052684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:57.222 [2024-10-15 02:10:06.052698] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:57.222 [2024-10-15 02:10:06.052712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
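In the layout dump that follows, each region is reported twice: ftl_layout.c prints offsets and sizes in MiB, while the superblock v5 dump lists the same regions as raw blk_offs/blk_sz block counts. The two agree if one FTL block is 4 KiB (inferred from the numbers, not stated in the log); for example, the type:0x2 region, apparently the L2P table given the size match, has blk_sz:0x5a00 and works out to exactly the 90.00 MiB shown for Region l2p. A quick check of that arithmetic:

  # 0x5a00 blocks x 4096 B/block, expressed in MiB
  echo $(( 0x5a00 * 4096 / 1048576 ))   # prints 90 -> matches "blocks: 90.00 MiB"

The offsets line up the same way: blk_offs:0x20 is 32 blocks = 0.125 MiB, matching the "offset: 0.12 MiB" printed for the l2p region.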
00:40:57.222 [2024-10-15 02:10:06.052725] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:57.222 [2024-10-15 02:10:06.052737] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:57.222 [2024-10-15 02:10:06.052749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:57.222 [2024-10-15 02:10:06.052771] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:57.222 [2024-10-15 02:10:06.052782] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:57.222 [2024-10-15 02:10:06.052792] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:57.222 [2024-10-15 02:10:06.052803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.052819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:57.222 [2024-10-15 02:10:06.052830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:40:57.222 [2024-10-15 02:10:06.052840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.052922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.222 [2024-10-15 02:10:06.052936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:57.222 [2024-10-15 02:10:06.052947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:40:57.222 [2024-10-15 02:10:06.052957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.222 [2024-10-15 02:10:06.053053] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:57.222 [2024-10-15 02:10:06.053068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:57.222 [2024-10-15 02:10:06.053084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:57.222 [2024-10-15 02:10:06.053115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:57.222 [2024-10-15 02:10:06.053145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:57.222 [2024-10-15 02:10:06.053177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:57.222 [2024-10-15 02:10:06.053186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:57.222 [2024-10-15 02:10:06.053198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:57.222 [2024-10-15 02:10:06.053208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:57.222 [2024-10-15 02:10:06.053218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:57.222 [2024-10-15 02:10:06.053227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:40:57.222 [2024-10-15 02:10:06.053247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:57.222 [2024-10-15 02:10:06.053276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:57.222 [2024-10-15 02:10:06.053305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:57.222 [2024-10-15 02:10:06.053334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:57.222 [2024-10-15 02:10:06.053362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:57.222 [2024-10-15 02:10:06.053391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:57.222 [2024-10-15 02:10:06.053424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:57.222 [2024-10-15 02:10:06.053435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:57.222 [2024-10-15 02:10:06.053446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:57.222 [2024-10-15 02:10:06.053455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:57.222 [2024-10-15 02:10:06.053465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:57.222 [2024-10-15 02:10:06.053474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:57.222 [2024-10-15 02:10:06.053494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:57.222 [2024-10-15 02:10:06.053505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:57.222 [2024-10-15 02:10:06.053528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:57.222 [2024-10-15 02:10:06.053539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.222 [2024-10-15 02:10:06.053561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:57.222 [2024-10-15 02:10:06.053570] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:57.222 [2024-10-15 02:10:06.053580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:57.222 [2024-10-15 02:10:06.053591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:57.222 [2024-10-15 02:10:06.053600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:57.222 [2024-10-15 02:10:06.053610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:57.222 [2024-10-15 02:10:06.053622] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:57.222 [2024-10-15 02:10:06.053640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:57.222 [2024-10-15 02:10:06.053651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:57.222 [2024-10-15 02:10:06.053662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:57.222 [2024-10-15 02:10:06.053672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:57.222 [2024-10-15 02:10:06.053683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:57.222 [2024-10-15 02:10:06.053709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:57.222 [2024-10-15 02:10:06.053720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:57.222 [2024-10-15 02:10:06.053730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:57.223 [2024-10-15 02:10:06.053741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:57.223 [2024-10-15 02:10:06.053752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:57.223 [2024-10-15 02:10:06.053763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:57.223 [2024-10-15 02:10:06.053773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:57.223 [2024-10-15 02:10:06.053784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:57.223 [2024-10-15 02:10:06.053795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:57.223 [2024-10-15 02:10:06.053806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:57.223 [2024-10-15 02:10:06.053817] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:57.223 [2024-10-15 02:10:06.053829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:57.223 [2024-10-15 02:10:06.053846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:57.223 [2024-10-15 02:10:06.053857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:57.223 [2024-10-15 02:10:06.053868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:57.223 [2024-10-15 02:10:06.053880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:57.223 [2024-10-15 02:10:06.053891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.053906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:57.223 [2024-10-15 02:10:06.053918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:40:57.223 [2024-10-15 02:10:06.053930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.102273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.102555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:57.223 [2024-10-15 02:10:06.102688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.266 ms 00:40:57.223 [2024-10-15 02:10:06.102737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.103075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.103250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:57.223 [2024-10-15 02:10:06.103355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:40:57.223 [2024-10-15 02:10:06.103491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.144877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.145077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:57.223 [2024-10-15 02:10:06.145185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.309 ms 00:40:57.223 [2024-10-15 02:10:06.145233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.145430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.145500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:57.223 [2024-10-15 02:10:06.145539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:57.223 [2024-10-15 02:10:06.145641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.146505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.146641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:57.223 [2024-10-15 02:10:06.146743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:40:57.223 [2024-10-15 02:10:06.146788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.146997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.147068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:57.223 [2024-10-15 02:10:06.147222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:40:57.223 [2024-10-15 02:10:06.147364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.165903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.166055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:57.223 [2024-10-15 02:10:06.166187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.440 ms 00:40:57.223 [2024-10-15 02:10:06.166216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.180596] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:57.223 [2024-10-15 02:10:06.180751] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:57.223 [2024-10-15 02:10:06.180775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.180788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:57.223 [2024-10-15 02:10:06.180800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.409 ms 00:40:57.223 [2024-10-15 02:10:06.180811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.205137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.205177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:57.223 [2024-10-15 02:10:06.205202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.221 ms 00:40:57.223 [2024-10-15 02:10:06.205212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.223 [2024-10-15 02:10:06.218170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.223 [2024-10-15 02:10:06.218227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:57.223 [2024-10-15 02:10:06.218244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.863 ms 00:40:57.223 [2024-10-15 02:10:06.218255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.231453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.231502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:57.482 [2024-10-15 02:10:06.231534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.112 ms 00:40:57.482 [2024-10-15 02:10:06.231545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.232448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.232481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:57.482 [2024-10-15 02:10:06.232497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:40:57.482 [2024-10-15 02:10:06.232509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.310660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 
02:10:06.310757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:57.482 [2024-10-15 02:10:06.310780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.107 ms 00:40:57.482 [2024-10-15 02:10:06.310800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.322397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:57.482 [2024-10-15 02:10:06.350990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.351080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:57.482 [2024-10-15 02:10:06.351121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.986 ms 00:40:57.482 [2024-10-15 02:10:06.351134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.351402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.351422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:57.482 [2024-10-15 02:10:06.351435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:57.482 [2024-10-15 02:10:06.351463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.351583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.351602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:57.482 [2024-10-15 02:10:06.351616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:40:57.482 [2024-10-15 02:10:06.351627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.351665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.351680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:57.482 [2024-10-15 02:10:06.351693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:57.482 [2024-10-15 02:10:06.351704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.351758] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:57.482 [2024-10-15 02:10:06.351776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.351793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:57.482 [2024-10-15 02:10:06.351821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:40:57.482 [2024-10-15 02:10:06.351832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.385088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.385138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:57.482 [2024-10-15 02:10:06.385157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.220 ms 00:40:57.482 [2024-10-15 02:10:06.385170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.385317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.482 [2024-10-15 02:10:06.385352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:57.482 [2024-10-15 
02:10:06.385366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:40:57.482 [2024-10-15 02:10:06.385379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.482 [2024-10-15 02:10:06.386924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:57.482 [2024-10-15 02:10:06.390892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.859 ms, result 0 00:40:57.482 [2024-10-15 02:10:06.391759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:57.482 [2024-10-15 02:10:06.406143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:58.859  [2024-10-15T02:10:08.814Z] Copying: 28/256 [MB] (28 MBps) [2024-10-15T02:10:09.749Z] Copying: 54/256 [MB] (25 MBps) [2024-10-15T02:10:10.684Z] Copying: 81/256 [MB] (26 MBps) [2024-10-15T02:10:11.670Z] Copying: 107/256 [MB] (26 MBps) [2024-10-15T02:10:12.606Z] Copying: 133/256 [MB] (25 MBps) [2024-10-15T02:10:13.542Z] Copying: 159/256 [MB] (26 MBps) [2024-10-15T02:10:14.477Z] Copying: 185/256 [MB] (25 MBps) [2024-10-15T02:10:15.852Z] Copying: 210/256 [MB] (25 MBps) [2024-10-15T02:10:16.419Z] Copying: 237/256 [MB] (26 MBps) [2024-10-15T02:10:16.679Z] Copying: 256/256 [MB] (average 26 MBps)[2024-10-15 02:10:16.459612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:07.667 [2024-10-15 02:10:16.474086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.474298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:07.667 [2024-10-15 02:10:16.474334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:07.667 [2024-10-15 02:10:16.474350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.474394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:41:07.667 [2024-10-15 02:10:16.478587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.478626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:07.667 [2024-10-15 02:10:16.478643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:41:07.667 [2024-10-15 02:10:16.478654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.478944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.478968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:07.667 [2024-10-15 02:10:16.478981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:41:07.667 [2024-10-15 02:10:16.478993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.481990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.482021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:07.667 [2024-10-15 02:10:16.482035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:41:07.667 [2024-10-15 02:10:16.482047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 
02:10:16.487903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.487950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:07.667 [2024-10-15 02:10:16.487973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms 00:41:07.667 [2024-10-15 02:10:16.487984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.512587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.512628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:07.667 [2024-10-15 02:10:16.512645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.511 ms 00:41:07.667 [2024-10-15 02:10:16.512656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.528481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.528519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:07.667 [2024-10-15 02:10:16.528535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.781 ms 00:41:07.667 [2024-10-15 02:10:16.528546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.528687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.528706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:07.667 [2024-10-15 02:10:16.528717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:41:07.667 [2024-10-15 02:10:16.528735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.553518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.553692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:07.667 [2024-10-15 02:10:16.553718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.761 ms 00:41:07.667 [2024-10-15 02:10:16.553729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.578193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.578229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:07.667 [2024-10-15 02:10:16.578244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.418 ms 00:41:07.667 [2024-10-15 02:10:16.578253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.601938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.601974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:07.667 [2024-10-15 02:10:16.601989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.644 ms 00:41:07.667 [2024-10-15 02:10:16.601999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.667 [2024-10-15 02:10:16.625679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.667 [2024-10-15 02:10:16.625716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:07.667 [2024-10-15 02:10:16.625730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.604 ms 00:41:07.667 [2024-10-15 02:10:16.625739] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:07.667 [2024-10-15 02:10:16.625779] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:41:07.667 [2024-10-15 02:10:16.625801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
... [Bands 2-100 elided: each reads 0 / 261120 wr_cnt: 0 state: free] ...
00:41:07.668 [2024-10-15 02:10:16.626971] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:41:07.668 [2024-10-15 02:10:16.626982] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1339797-0eae-46ce-abba-f9aa2d840265
00:41:07.668 [2024-10-15 02:10:16.626993] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:41:07.668 [2024-10-15 02:10:16.627003] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:41:07.668 [2024-10-15 02:10:16.627020] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:41:07.668 [2024-10-15 02:10:16.627031] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:41:07.668 [2024-10-15 02:10:16.627041] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:41:07.668 [2024-10-15 02:10:16.627052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:41:07.668 [2024-10-15 02:10:16.627063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:41:07.668 [2024-10-15 02:10:16.627073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:41:07.668 [2024-10-15 02:10:16.627083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:41:07.668 [2024-10-15 02:10:16.627093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:41:07.668 [2024-10-15 02:10:16.627104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:41:07.668 [2024-10-15 02:10:16.627116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms
00:41:07.668 [2024-10-15 02:10:16.627127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:07.668 [2024-10-15 02:10:16.641663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:41:07.668 [2024-10-15 02:10:16.641698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:41:07.668 [2024-10-15 02:10:16.641729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.495 ms
00:41:07.668 [2024-10-15 02:10:16.641740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:07.668 [2024-10-15 02:10:16.642231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:41:07.668 [2024-10-15 02:10:16.642259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:41:07.668 [2024-10-15 02:10:16.642273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms
00:41:07.668 [2024-10-15 02:10:16.642284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:07.927 [2024-10-15 02:10:16.679600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:41:07.927 [2024-10-15 02:10:16.679645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:41:07.927 [2024-10-15 02:10:16.679661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:41:07.927 [2024-10-15 02:10:16.679673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:07.927 [2024-10-15 02:10:16.679791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:41:07.927 [2024-10-15 02:10:16.679809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
metadata 00:41:07.927 [2024-10-15 02:10:16.679821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.679831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.679889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.927 [2024-10-15 02:10:16.679914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:07.927 [2024-10-15 02:10:16.679926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.679936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.679960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.927 [2024-10-15 02:10:16.679973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:07.927 [2024-10-15 02:10:16.679985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.679996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.771487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.927 [2024-10-15 02:10:16.771561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:07.927 [2024-10-15 02:10:16.771596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.771608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.848199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.927 [2024-10-15 02:10:16.848270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:07.927 [2024-10-15 02:10:16.848306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.848319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.848471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.927 [2024-10-15 02:10:16.848490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:07.927 [2024-10-15 02:10:16.848510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.848522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.848561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.927 [2024-10-15 02:10:16.848575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:07.927 [2024-10-15 02:10:16.848587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.927 [2024-10-15 02:10:16.848599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.927 [2024-10-15 02:10:16.848729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.928 [2024-10-15 02:10:16.848749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:07.928 [2024-10-15 02:10:16.848768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.928 [2024-10-15 02:10:16.848779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.928 [2024-10-15 02:10:16.848845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.928 [2024-10-15 02:10:16.848863] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:07.928 [2024-10-15 02:10:16.848877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.928 [2024-10-15 02:10:16.848889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.928 [2024-10-15 02:10:16.848946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.928 [2024-10-15 02:10:16.848970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:07.928 [2024-10-15 02:10:16.848983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.928 [2024-10-15 02:10:16.849001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.928 [2024-10-15 02:10:16.849063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.928 [2024-10-15 02:10:16.849080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:07.928 [2024-10-15 02:10:16.849094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.928 [2024-10-15 02:10:16.849105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.928 [2024-10-15 02:10:16.849323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.219 ms, result 0 00:41:07.928 [2024-10-15 02:10:16.851139] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x200019efcce0 was disconnected and freed. delete nvme_qpair. 00:41:07.928 [2024-10-15 02:10:16.852413] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001992aca0 was disconnected and freed. delete nvme_qpair. 00:41:07.928 [2024-10-15 02:10:16.856834] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 
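A quick reading of the statistics block above, assuming the usual definition of write amplification: WAF = total writes / user writes = 960 / 0, which is printed as inf because this FTL instance was shut down before any user I/O reached it; the 960 writes are the device's own startup and metadata traffic. The ratio only becomes meaningful (ideally close to 1.0) once user writes are non-zero.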
00:41:08.862
00:41:08.862
00:41:09.121 02:10:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:41:09.687 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:41:09.687 02:10:18 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:41:09.687 02:10:18 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:41:09.687 02:10:18 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:41:09.688 02:10:18 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:41:09.688 02:10:18 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:41:09.688 02:10:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:41:09.688 Process with pid 76575 is not found
00:41:09.688 02:10:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76575
00:41:09.688 02:10:18 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76575 ']'
00:41:09.688 02:10:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76575
00:41:09.688 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76575) - No such process
00:41:09.688 02:10:18 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 76575 is not found'
00:41:09.688 ************************************
00:41:09.688 END TEST ftl_trim
00:41:09.688 ************************************
00:41:09.688
00:41:09.688 real 1m10.004s
00:41:09.688 user 1m37.018s
00:41:09.688 sys 0m7.938s
00:41:09.688 02:10:18 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:09.688 02:10:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:41:09.688 02:10:18 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:41:09.688 02:10:18 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:41:09.688 02:10:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:41:09.688 02:10:18 ftl -- common/autotest_common.sh@10 -- # set +x
00:41:09.688 ************************************
00:41:09.688 START TEST ftl_restore
00:41:09.688 ************************************
00:41:09.688 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
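The ftl_restore test that starts here is driven entirely by the run_test invocation captured just above. A minimal manual equivalent, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk and the same two QEMU NVMe controllers, would be:

    cd /home/vagrant/spdk_repo/spdk
    # -c <bdf> selects the NV-cache device; the positional argument is the base device
    sudo ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0

The getopts trace further down confirms the mapping: -c sets nv_cache=0000:00:10.0 and the remaining positional argument becomes device=0000:00:11.0.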
00:41:09.688 * Looking for test storage...
00:41:09.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:41:09.688 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:41:09.688 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version
00:41:09.688 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:41:09.947 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:41:09.947 02:10:18 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:41:09.948 02:10:18 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zSuNWySpMl
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76839
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76839
00:41:09.948 02:10:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76839 ']'
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:09.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable
00:41:09.948 02:10:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:41:09.948 [2024-10-15 02:10:18.915844] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
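The svcpid=76839 / waitforlisten 76839 pair above reduces to backgrounding spdk_tgt and polling its RPC socket until it answers. A rough sketch of that logic (not the exact helper, which also enforces the max_retries=100 seen in the trace), assuming the default /var/tmp/spdk.sock socket:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    svcpid=$!
    # rpc_get_methods is a cheap RPC that succeeds as soon as the server is listening
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done

Once the loop exits, the target is ready for the bdev RPCs that follow.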
00:41:09.948 [2024-10-15 02:10:18.916030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76839 ]
00:41:10.207 [2024-10-15 02:10:19.090054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:10.465 [2024-10-15 02:10:19.309489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:41:11.400 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:11.400 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0
00:41:11.400 02:10:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:41:11.400 02:10:20 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:41:11.400 02:10:20 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:41:11.400 02:10:20 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:41:11.400 02:10:20 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:41:11.400 02:10:20 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:41:11.658 02:10:20 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:41:11.658 02:10:20 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:41:11.658 02:10:20 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:41:11.658 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1
00:41:11.658 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:41:11.658 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:41:11.658 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:41:11.658 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[
  {
    "name": "nvme0n1",
    "aliases": [
      "af5fa4df-65de-4ab3-89bf-67d3455feeb0"
    ],
    "product_name": "NVMe disk",
    "block_size": 4096,
    "num_blocks": 1310720,
    "uuid": "af5fa4df-65de-4ab3-89bf-67d3455feeb0",
    "numa_id": -1,
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": true,
    "claim_type": "read_many_write_one",
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "flush": true,
      "reset": true,
      "nvme_admin": true,
      "nvme_io": true,
      "nvme_io_md": false,
      "write_zeroes": true,
      "zcopy": false,
      "get_zone_info": false,
      "zone_management": false,
      "zone_append": false,
      "compare": true,
      "compare_and_write": false,
      "abort": true,
      "seek_hole": false,
      "seek_data": false,
      "copy": true,
      "nvme_iov_md": false
    },
    "driver_specific": {
      "nvme": [
        {
          "pci_address": "0000:00:11.0",
          "trid": {
            "trtype": "PCIe",
            "traddr": "0000:00:11.0"
          },
          "ctrlr_data": {
            "cntlid": 0,
            "vendor_id": "0x1b36",
            "model_number": "QEMU NVMe Ctrl",
            "serial_number": "12341",
            "firmware_revision": "8.0.0",
            "subnqn": "nqn.2019-08.org.qemu:12341",
            "oacs": {
              "security": 0,
              "format": 1,
              "firmware": 0,
              "ns_manage": 1
            },
            "multi_ctrlr": false,
            "ana_reporting": false
          },
          "vs": {
            "nvme_version": "1.4"
          },
          "ns_data": {
            "id": 1,
            "can_share": false
          }
        }
      ],
      "mp_policy": "active_passive"
    }
  }
]'
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:41:11.917 02:10:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120
00:41:11.917 02:10:20 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120
00:41:11.917 02:10:20 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:41:11.917 02:10:20 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:41:11.917 02:10:20 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:41:11.917 02:10:20 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:41:12.176 02:10:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7af46add-c4f2-484c-bda1-34b6ab418b10
00:41:12.176 02:10:21 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
00:41:12.176 02:10:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7af46add-c4f2-484c-bda1-34b6ab418b10
00:41:12.434 [2024-10-15 02:10:21.252636] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair.
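The get_bdev_size helper traced above is just block_size x num_blocks converted to MiB: for nvme0n1, 4096 B x 1310720 blocks = 5368709120 B = 5120 MiB, which is exactly the echoed 5120. A condensed one-call version of the same computation (a sketch; it uses only RPCs and jq filters that appear in the trace):

    read -r bs nb < <(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq -r '.[0] | "\(.block_size) \(.num_blocks)"')
    echo $(( bs * nb / 1024 / 1024 ))   # prints 5120 (MiB)

The [[ 103424 -le 5120 ]] guard that follows fails, so the helper apparently falls back to providing the requested 103424 MiB as a thin-provisioned logical volume on the 5120 MiB disk, which is what the bdev_lvol_create ... -t call below does.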
00:41:12.434 02:10:21 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:41:12.693 02:10:21 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3499ea23-231c-4f33-9c41-cbb45eda0bea
00:41:12.693 02:10:21 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3499ea23-231c-4f33-9c41-cbb45eda0bea
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
00:41:12.951 02:10:21 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:12.951 02:10:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:12.951 02:10:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:41:12.951 02:10:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:41:12.951 02:10:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:41:12.951 02:10:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[
  {
    "name": "becf7cf0-44a3-4d13-bb85-80525aab801c",
    "aliases": [
      "lvs/nvme0n1p0"
    ],
    "product_name": "Logical Volume",
    "block_size": 4096,
    "num_blocks": 26476544,
    "uuid": "becf7cf0-44a3-4d13-bb85-80525aab801c",
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": false,
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "flush": false,
      "reset": true,
      "nvme_admin": false,
      "nvme_io": false,
      "nvme_io_md": false,
      "write_zeroes": true,
      "zcopy": false,
      "get_zone_info": false,
      "zone_management": false,
      "zone_append": false,
      "compare": false,
      "compare_and_write": false,
      "abort": false,
      "seek_hole": true,
      "seek_data": true,
      "copy": false,
      "nvme_iov_md": false
    },
    "driver_specific": {
      "lvol": {
        "lvol_store_uuid": "3499ea23-231c-4f33-9c41-cbb45eda0bea",
        "base_bdev": "nvme0n1",
        "thin_provision": true,
        "num_allocated_clusters": 0,
        "snapshot": false,
        "clone": false,
        "esnap_clone": false
      }
    }
  }
]'
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:41:13.210 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:41:13.469 02:10:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
00:41:13.469 02:10:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:41:13.469 02:10:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:41:13.469 [2024-10-15 02:10:22.349449] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair.
00:41:13.469 02:10:22 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:41:13.469 02:10:22 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:41:13.469 02:10:22 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:13.469 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:13.469 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:41:13.469 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:41:13.469 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:41:13.469 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:13.728 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ ... ]' (output identical to the becf7cf0-44a3-4d13-bb85-80525aab801c dump above)
00:41:13.729 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:41:13.729 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:41:13.729 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:41:13.729 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544
00:41:13.729 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:41:13.729 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:41:13.729 02:10:22 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:41:13.729 02:10:22 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:41:13.989 [2024-10-15 02:10:22.888612] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair.
00:41:13.989 02:10:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
00:41:13.989 02:10:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:13.989 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:13.989 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
00:41:13.989 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs
00:41:13.989 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb
00:41:13.989 02:10:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b becf7cf0-44a3-4d13-bb85-80525aab801c
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ ... ]' (output again identical to the becf7cf0-44a3-4d13-bb85-80525aab801c dump above)
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:41:14.249 02:10:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d becf7cf0-44a3-4d13-bb85-80525aab801c --l2p_dram_limit 10'
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:41:14.250 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:41:14.250 02:10:23 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d becf7cf0-44a3-4d13-bb85-80525aab801c --l2p_dram_limit 10 -c nvc0n1p0
00:41:14.509 [2024-10-15 02:10:23.382858] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.006 ms, status 0
00:41:14.509 [2024-10-15 02:10:23.383023] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.037 ms, status 0
00:41:14.509 [2024-10-15 02:10:23.383109] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:41:14.509 [2024-10-15 02:10:23.383965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:41:14.509 [2024-10-15 02:10:23.383998] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.903 ms, status 0
00:41:14.509 [2024-10-15 02:10:23.384181] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3ef74bb9-2e50-4b4e-aca6-8d1079fe565a
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:41:14.509 [2024-10-15 02:10:23.386569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:41:14.509 [2024-10-15 02:10:23.386582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.399615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.399663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:14.509 [2024-10-15 02:10:23.399683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.952 ms 00:41:14.509 [2024-10-15 02:10:23.399695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.399837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.399856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:14.509 [2024-10-15 02:10:23.399872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:41:14.509 [2024-10-15 02:10:23.399887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.399964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.399981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:14.509 [2024-10-15 02:10:23.399995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:14.509 [2024-10-15 02:10:23.400006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.400041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:14.509 [2024-10-15 02:10:23.405290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.405480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:14.509 [2024-10-15 02:10:23.405508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:41:14.509 [2024-10-15 02:10:23.405524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.405570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.405592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:14.509 [2024-10-15 02:10:23.405608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:14.509 [2024-10-15 02:10:23.405626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.405668] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:41:14.509 [2024-10-15 02:10:23.405836] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:14.509 [2024-10-15 02:10:23.405853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:14.509 [2024-10-15 02:10:23.405874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:14.509 [2024-10-15 02:10:23.405887] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:14.509 [2024-10-15 02:10:23.405903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache 
device capacity: 5171.00 MiB 00:41:14.509 [2024-10-15 02:10:23.405915] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:14.509 [2024-10-15 02:10:23.405928] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:14.509 [2024-10-15 02:10:23.405950] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:14.509 [2024-10-15 02:10:23.405964] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:14.509 [2024-10-15 02:10:23.405990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.406003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:14.509 [2024-10-15 02:10:23.406014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:41:14.509 [2024-10-15 02:10:23.406026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.406108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.509 [2024-10-15 02:10:23.406129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:14.509 [2024-10-15 02:10:23.406141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:41:14.509 [2024-10-15 02:10:23.406152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.509 [2024-10-15 02:10:23.406245] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:14.509 [2024-10-15 02:10:23.406263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:14.509 [2024-10-15 02:10:23.406275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:14.509 [2024-10-15 02:10:23.406311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:14.509 [2024-10-15 02:10:23.406341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:14.509 [2024-10-15 02:10:23.406362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:14.509 [2024-10-15 02:10:23.406374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:14.509 [2024-10-15 02:10:23.406384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:14.509 [2024-10-15 02:10:23.406398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:14.509 [2024-10-15 02:10:23.406408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:14.509 [2024-10-15 02:10:23.406422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:14.509 [2024-10-15 02:10:23.406464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:41:14.509 [2024-10-15 02:10:23.406487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:14.509 [2024-10-15 02:10:23.406497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:14.509 [2024-10-15 02:10:23.406543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:14.509 [2024-10-15 02:10:23.406577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:14.509 [2024-10-15 02:10:23.406613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:14.509 [2024-10-15 02:10:23.406622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:14.509 [2024-10-15 02:10:23.406634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:14.510 [2024-10-15 02:10:23.406643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:14.510 [2024-10-15 02:10:23.406655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:14.510 [2024-10-15 02:10:23.406665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:14.510 [2024-10-15 02:10:23.406677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:14.510 [2024-10-15 02:10:23.406686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:14.510 [2024-10-15 02:10:23.406698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:14.510 [2024-10-15 02:10:23.406708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:14.510 [2024-10-15 02:10:23.406719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:14.510 [2024-10-15 02:10:23.406728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:14.510 [2024-10-15 02:10:23.406740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:14.510 [2024-10-15 02:10:23.406749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:14.510 [2024-10-15 02:10:23.406761] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:14.510 [2024-10-15 02:10:23.406774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:14.510 [2024-10-15 02:10:23.406789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:14.510 [2024-10-15 02:10:23.406799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:14.510 [2024-10-15 02:10:23.406816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:14.510 [2024-10-15 02:10:23.406827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:14.510 [2024-10-15 02:10:23.406840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:14.510 [2024-10-15 02:10:23.406850] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:41:14.510 [2024-10-15 02:10:23.406861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:14.510 [2024-10-15 02:10:23.406871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:14.510 [2024-10-15 02:10:23.406888] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:14.510 [2024-10-15 02:10:23.406902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:14.510 [2024-10-15 02:10:23.406916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:14.510 [2024-10-15 02:10:23.406926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:14.510 [2024-10-15 02:10:23.406938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:14.510 [2024-10-15 02:10:23.406949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:14.510 [2024-10-15 02:10:23.406961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:14.510 [2024-10-15 02:10:23.406972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:14.510 [2024-10-15 02:10:23.406987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:14.510 [2024-10-15 02:10:23.406997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:14.510 [2024-10-15 02:10:23.407009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:14.510 [2024-10-15 02:10:23.407019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:14.510 [2024-10-15 02:10:23.407032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:14.510 [2024-10-15 02:10:23.407042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:14.510 [2024-10-15 02:10:23.407054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:14.510 [2024-10-15 02:10:23.407064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:14.510 [2024-10-15 02:10:23.407077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:14.510 [2024-10-15 02:10:23.407089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:14.510 [2024-10-15 02:10:23.407103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:14.510 [2024-10-15 
02:10:23.407113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:14.510 [2024-10-15 02:10:23.407127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:14.510 [2024-10-15 02:10:23.407138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:14.510 [2024-10-15 02:10:23.407152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.510 [2024-10-15 02:10:23.407163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:14.510 [2024-10-15 02:10:23.407179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:41:14.510 [2024-10-15 02:10:23.407189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.510 [2024-10-15 02:10:23.407251] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:41:14.510 [2024-10-15 02:10:23.407268] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:41:17.796 [2024-10-15 02:10:26.620922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.621005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:41:17.796 [2024-10-15 02:10:26.621048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3213.679 ms 00:41:17.796 [2024-10-15 02:10:26.621061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.662779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.662886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:17.796 [2024-10-15 02:10:26.662931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.430 ms 00:41:17.796 [2024-10-15 02:10:26.662948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.663156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.663175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:17.796 [2024-10-15 02:10:26.663191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:41:17.796 [2024-10-15 02:10:26.663206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.718432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.718511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:17.796 [2024-10-15 02:10:26.718555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.143 ms 00:41:17.796 [2024-10-15 02:10:26.718575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.718668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.718694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:17.796 [2024-10-15 02:10:26.718715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:41:17.796 [2024-10-15 02:10:26.718730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 
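The dump_region table and the SB metadata dump above describe the same layout in two units: dump_region prints offsets and sizes in MiB, while the superblock lines print raw FTL blocks in hex. Assuming the standard 4 KiB FTL block size, the two views can be cross-checked with shell arithmetic; for example, the l2p region's blk_sz of 0x5000 works out to exactly the 80.00 MiB shown above (values taken from this dump, script is illustrative):

  # cross-check: convert an SB metadata blk_sz (counted in 4 KiB FTL blocks) to MiB
  blk_sz=0x5000                               # Region type:0x2 (l2p) from the dump above
  echo $(( blk_sz * 4096 / 1024 / 1024 ))     # prints 80, matching "blocks: 80.00 MiB"
  # offsets line up the same way: blk_offs 0x20 is 32 blocks = 0.12 MiB,
  # and trim_md's blk_sz 0x40 is 64 blocks = 0.25 MiB, as reported above
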
02:10:26.719650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.719685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:17.796 [2024-10-15 02:10:26.719713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:41:17.796 [2024-10-15 02:10:26.719729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.719984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.720012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:17.796 [2024-10-15 02:10:26.720048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:41:17.796 [2024-10-15 02:10:26.720064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.742171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.742235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:17.796 [2024-10-15 02:10:26.742257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.055 ms 00:41:17.796 [2024-10-15 02:10:26.742269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.796 [2024-10-15 02:10:26.755325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:17.796 [2024-10-15 02:10:26.760540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.796 [2024-10-15 02:10:26.760579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:17.796 [2024-10-15 02:10:26.760599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.165 ms 00:41:17.796 [2024-10-15 02:10:26.760613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.841412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.841612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:41:18.055 [2024-10-15 02:10:26.841644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.764 ms 00:41:18.055 [2024-10-15 02:10:26.841664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.841919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.841967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:18.055 [2024-10-15 02:10:26.841982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:41:18.055 [2024-10-15 02:10:26.841996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.867051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.867100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:41:18.055 [2024-10-15 02:10:26.867117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.992 ms 00:41:18.055 [2024-10-15 02:10:26.867131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.891330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.891537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:41:18.055 [2024-10-15 
02:10:26.891564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.154 ms 00:41:18.055 [2024-10-15 02:10:26.891581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.892424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.892464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:18.055 [2024-10-15 02:10:26.892477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:41:18.055 [2024-10-15 02:10:26.892493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.971727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.971777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:41:18.055 [2024-10-15 02:10:26.971798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.191 ms 00:41:18.055 [2024-10-15 02:10:26.971812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:26.998776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:26.998826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:41:18.055 [2024-10-15 02:10:26.998843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.865 ms 00:41:18.055 [2024-10-15 02:10:26.998866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:27.023296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:27.023341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:41:18.055 [2024-10-15 02:10:27.023356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.386 ms 00:41:18.055 [2024-10-15 02:10:27.023369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:27.048322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:27.048369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:18.055 [2024-10-15 02:10:27.048385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.894 ms 00:41:18.055 [2024-10-15 02:10:27.048414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:27.048466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:27.048488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:18.055 [2024-10-15 02:10:27.048504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:41:18.055 [2024-10-15 02:10:27.048517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:27.048643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.055 [2024-10-15 02:10:27.048665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:18.055 [2024-10-15 02:10:27.048676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:41:18.055 [2024-10-15 02:10:27.048690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.055 [2024-10-15 02:10:27.050400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3666.929 ms, result 0 00:41:18.055 { 
00:41:18.055 "name": "ftl0", 00:41:18.055 "uuid": "3ef74bb9-2e50-4b4e-aca6-8d1079fe565a" 00:41:18.055 } 00:41:18.314 02:10:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:41:18.314 02:10:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:41:18.572 02:10:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:41:18.572 02:10:27 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:41:18.831 [2024-10-15 02:10:27.601228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.601278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:18.831 [2024-10-15 02:10:27.601302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:18.831 [2024-10-15 02:10:27.601315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.601366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:18.831 [2024-10-15 02:10:27.604945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.604996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:18.831 [2024-10-15 02:10:27.605011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:41:18.831 [2024-10-15 02:10:27.605024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.605292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.605316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:18.831 [2024-10-15 02:10:27.605328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:41:18.831 [2024-10-15 02:10:27.605341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.607946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.607983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:18.831 [2024-10-15 02:10:27.607997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:41:18.831 [2024-10-15 02:10:27.608014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.613331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.613524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:18.831 [2024-10-15 02:10:27.613550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:41:18.831 [2024-10-15 02:10:27.613566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.639330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.639376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:18.831 [2024-10-15 02:10:27.639392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:41:18.831 [2024-10-15 02:10:27.639415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.656048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.656084] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:18.831 [2024-10-15 02:10:27.656099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.589 ms 00:41:18.831 [2024-10-15 02:10:27.656112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.656285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.656310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:18.831 [2024-10-15 02:10:27.656322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:41:18.831 [2024-10-15 02:10:27.656335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.831 [2024-10-15 02:10:27.681198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.831 [2024-10-15 02:10:27.681239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:18.831 [2024-10-15 02:10:27.681253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.842 ms 00:41:18.832 [2024-10-15 02:10:27.681266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.832 [2024-10-15 02:10:27.705565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.832 [2024-10-15 02:10:27.705607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:18.832 [2024-10-15 02:10:27.705622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.258 ms 00:41:18.832 [2024-10-15 02:10:27.705634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.832 [2024-10-15 02:10:27.729577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.832 [2024-10-15 02:10:27.729621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:18.832 [2024-10-15 02:10:27.729635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.902 ms 00:41:18.832 [2024-10-15 02:10:27.729647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.832 [2024-10-15 02:10:27.753493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.832 [2024-10-15 02:10:27.753536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:18.832 [2024-10-15 02:10:27.753550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.757 ms 00:41:18.832 [2024-10-15 02:10:27.753562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.832 [2024-10-15 02:10:27.753603] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:18.832 [2024-10-15 02:10:27.753630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 
02:10:27.753704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.753982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:41:18.832 [2024-10-15 02:10:27.753994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:18.832 [2024-10-15 02:10:27.754662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:18.833 [2024-10-15 02:10:27.754877] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:18.833 [2024-10-15 02:10:27.754891] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ef74bb9-2e50-4b4e-aca6-8d1079fe565a 00:41:18.833 [2024-10-15 02:10:27.754904] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:18.833 [2024-10-15 02:10:27.754914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:18.833 [2024-10-15 02:10:27.754926] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:18.833 [2024-10-15 02:10:27.754937] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:18.833 [2024-10-15 02:10:27.754949] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:18.833 [2024-10-15 02:10:27.754963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:18.833 [2024-10-15 02:10:27.754975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:18.833 [2024-10-15 02:10:27.754984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:18.833 [2024-10-15 02:10:27.754995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:18.833 [2024-10-15 02:10:27.755005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.833 [2024-10-15 02:10:27.755017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:18.833 [2024-10-15 02:10:27.755028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:41:18.833 [2024-10-15 02:10:27.755040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.833 [2024-10-15 02:10:27.770003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.833 [2024-10-15 02:10:27.770043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:18.833 [2024-10-15 02:10:27.770058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.922 ms 00:41:18.833 [2024-10-15 02:10:27.770075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.833 [2024-10-15 02:10:27.770575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:18.833 [2024-10-15 02:10:27.770633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:18.833 [2024-10-15 02:10:27.770648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:41:18.833 [2024-10-15 02:10:27.770662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.833 [2024-10-15 02:10:27.814069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.833 [2024-10-15 02:10:27.814112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:18.833 [2024-10-15 02:10:27.814127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.833 [2024-10-15 02:10:27.814145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.833 [2024-10-15 02:10:27.814208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.833 [2024-10-15 02:10:27.814226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:18.833 [2024-10-15 02:10:27.814239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.833 [2024-10-15 02:10:27.814252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.833 [2024-10-15 02:10:27.814360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.833 [2024-10-15 02:10:27.814387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:18.833 [2024-10-15 02:10:27.814400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.833 [2024-10-15 02:10:27.814436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.833 [2024-10-15 02:10:27.814465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.833 [2024-10-15 02:10:27.814481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:18.833 [2024-10-15 02:10:27.814492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
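Each management step in this shutdown is logged as a trace_step quadruple: Action (or Rollback), name, duration, status. The statistics block above also shows why WAF prints as inf: write amplification is total writes divided by user writes, and this run issued 960 internal metadata writes against 0 user writes. Step durations can be tabulated from a saved copy of a run like this with a small awk filter (log file name illustrative):

  # tabulate management-step durations from a saved autotest log
  awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); print name ": " $0 }' ftl_run.log
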
00:41:18.833 [2024-10-15 02:10:27.814504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.095 [2024-10-15 02:10:27.903687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.095 [2024-10-15 02:10:27.903760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:19.095 [2024-10-15 02:10:27.903778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.095 [2024-10-15 02:10:27.903798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.095 [2024-10-15 02:10:27.976495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.095 [2024-10-15 02:10:27.976559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:19.095 [2024-10-15 02:10:27.976577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.095 [2024-10-15 02:10:27.976592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.095 [2024-10-15 02:10:27.976736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.095 [2024-10-15 02:10:27.976763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:19.095 [2024-10-15 02:10:27.976776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.095 [2024-10-15 02:10:27.976791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.095 [2024-10-15 02:10:27.976891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.095 [2024-10-15 02:10:27.976912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:19.095 [2024-10-15 02:10:27.976924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.095 [2024-10-15 02:10:27.976938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.095 [2024-10-15 02:10:27.977097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.095 [2024-10-15 02:10:27.977121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:19.095 [2024-10-15 02:10:27.977134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.095 [2024-10-15 02:10:27.977148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.096 [2024-10-15 02:10:27.977200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.096 [2024-10-15 02:10:27.977226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:19.096 [2024-10-15 02:10:27.977238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.096 [2024-10-15 02:10:27.977251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.096 [2024-10-15 02:10:27.977311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.096 [2024-10-15 02:10:27.977332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:19.096 [2024-10-15 02:10:27.977344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.096 [2024-10-15 02:10:27.977358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.096 [2024-10-15 02:10:27.977435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:19.096 [2024-10-15 02:10:27.977479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:19.096 [2024-10-15 02:10:27.977495] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:19.096 [2024-10-15 02:10:27.977509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:19.096 [2024-10-15 02:10:27.977700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.417 ms, result 0 00:41:19.096 [2024-10-15 02:10:27.978828] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:41:19.096 true 00:41:19.096 02:10:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76839 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76839 ']' 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76839 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76839 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:19.096 killing process with pid 76839 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76839' 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76839 00:41:19.096 02:10:28 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76839 00:41:20.033 [2024-10-15 02:10:28.956876] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015920 was disconnected and freed. delete nvme_qpair. 00:41:24.261 02:10:32 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:41:28.447 262144+0 records in 00:41:28.447 262144+0 records out 00:41:28.447 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28287 s, 251 MB/s 00:41:28.447 02:10:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:41:30.348 02:10:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:30.348 [2024-10-15 02:10:38.985739] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:41:30.348 [2024-10-15 02:10:38.985879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77085 ] 00:41:30.348 [2024-10-15 02:10:39.154957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.607 [2024-10-15 02:10:39.406001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.865 [2024-10-15 02:10:39.751718] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:30.865 [2024-10-15 02:10:39.751810] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:31.126 [2024-10-15 02:10:39.924032] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. 
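The killprocess 76839 trace above expands the autotest_common.sh helper step by step: check the pid argument, confirm the process still exists with kill -0, make sure it is not the sudo wrapper via ps -o comm=, then kill and wait. Reconstructed as a simplified sketch (the real helper carries more error handling than the trace shows):

  # simplified reconstruction of the killprocess helper traced above
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      if [[ $(uname) == Linux ]]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [[ $process_name == sudo ]] && return 1     # never kill the sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

The dd figures above are self-consistent as well: 1073741824 bytes over 4.28287 s is about 250.7 MB/s, which dd rounds to the 251 MB/s shown.
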
delete nvme_qpair. 00:41:31.126 [2024-10-15 02:10:39.936424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.936464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:31.126 [2024-10-15 02:10:39.936496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:41:31.126 [2024-10-15 02:10:39.936508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.936579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.936598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:31.126 [2024-10-15 02:10:39.936612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:41:31.126 [2024-10-15 02:10:39.936623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.936653] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:31.126 [2024-10-15 02:10:39.937457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:31.126 [2024-10-15 02:10:39.937499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.937513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:31.126 [2024-10-15 02:10:39.937527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:41:31.126 [2024-10-15 02:10:39.937550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.939844] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:31.126 [2024-10-15 02:10:39.954168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.954210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:31.126 [2024-10-15 02:10:39.954226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.325 ms 00:41:31.126 [2024-10-15 02:10:39.954239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.954312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.954332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:31.126 [2024-10-15 02:10:39.954346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:41:31.126 [2024-10-15 02:10:39.954356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.963712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.963747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:31.126 [2024-10-15 02:10:39.963763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.163 ms 00:41:31.126 [2024-10-15 02:10:39.963773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.963892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.963912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:31.126 [2024-10-15 02:10:39.963925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:41:31.126 [2024-10-15 02:10:39.963946] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.963999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.964016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:31.126 [2024-10-15 02:10:39.964029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:31.126 [2024-10-15 02:10:39.964039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.964072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:31.126 [2024-10-15 02:10:39.968623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.968655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:31.126 [2024-10-15 02:10:39.968669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:41:31.126 [2024-10-15 02:10:39.968681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.968716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.968730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:31.126 [2024-10-15 02:10:39.968742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:31.126 [2024-10-15 02:10:39.968765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.968827] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:31.126 [2024-10-15 02:10:39.968863] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:31.126 [2024-10-15 02:10:39.968934] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:31.126 [2024-10-15 02:10:39.968970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:31.126 [2024-10-15 02:10:39.969069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:31.126 [2024-10-15 02:10:39.969084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:31.126 [2024-10-15 02:10:39.969111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:31.126 [2024-10-15 02:10:39.969128] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969142] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969155] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:31.126 [2024-10-15 02:10:39.969166] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:31.126 [2024-10-15 02:10:39.969177] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:31.126 [2024-10-15 02:10:39.969188] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:31.126 [2024-10-15 02:10:39.969200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.969211] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:31.126 [2024-10-15 02:10:39.969223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:41:31.126 [2024-10-15 02:10:39.969233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.969335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.126 [2024-10-15 02:10:39.969350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:31.126 [2024-10-15 02:10:39.969363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:41:31.126 [2024-10-15 02:10:39.969374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.126 [2024-10-15 02:10:39.969525] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:31.126 [2024-10-15 02:10:39.969548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:31.126 [2024-10-15 02:10:39.969562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:31.126 [2024-10-15 02:10:39.969600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:31.126 [2024-10-15 02:10:39.969633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:31.126 [2024-10-15 02:10:39.969655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:31.126 [2024-10-15 02:10:39.969667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:31.126 [2024-10-15 02:10:39.969699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:31.126 [2024-10-15 02:10:39.969712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:31.126 [2024-10-15 02:10:39.969724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:31.126 [2024-10-15 02:10:39.969735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:31.126 [2024-10-15 02:10:39.969758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:31.126 [2024-10-15 02:10:39.969807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:31.126 [2024-10-15 02:10:39.969839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969861] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:31.126 [2024-10-15 02:10:39.969872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:31.126 [2024-10-15 02:10:39.969904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:31.126 [2024-10-15 02:10:39.969925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:31.126 [2024-10-15 02:10:39.969936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:31.126 [2024-10-15 02:10:39.969947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:31.126 [2024-10-15 02:10:39.969958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:31.126 [2024-10-15 02:10:39.969968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:31.126 [2024-10-15 02:10:39.969981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:31.126 [2024-10-15 02:10:39.969992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:31.126 [2024-10-15 02:10:39.970004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:31.126 [2024-10-15 02:10:39.970014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:31.126 [2024-10-15 02:10:39.970025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:31.126 [2024-10-15 02:10:39.970036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:31.126 [2024-10-15 02:10:39.970047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:31.127 [2024-10-15 02:10:39.970057] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:31.127 [2024-10-15 02:10:39.970081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:31.127 [2024-10-15 02:10:39.970093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:31.127 [2024-10-15 02:10:39.970104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:31.127 [2024-10-15 02:10:39.970117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:31.127 [2024-10-15 02:10:39.970128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:31.127 [2024-10-15 02:10:39.970139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:31.127 [2024-10-15 02:10:39.970150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:31.127 [2024-10-15 02:10:39.970160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:31.127 [2024-10-15 02:10:39.970171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:31.127 [2024-10-15 02:10:39.970184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:31.127 [2024-10-15 02:10:39.970199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:31.127 [2024-10-15 02:10:39.970222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:31.127 [2024-10-15 02:10:39.970233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:31.127 [2024-10-15 02:10:39.970245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:31.127 [2024-10-15 02:10:39.970256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:31.127 [2024-10-15 02:10:39.970267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:31.127 [2024-10-15 02:10:39.970278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:31.127 [2024-10-15 02:10:39.970290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:31.127 [2024-10-15 02:10:39.970301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:31.127 [2024-10-15 02:10:39.970313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:31.127 [2024-10-15 02:10:39.970381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:31.127 [2024-10-15 02:10:39.970393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:31.127 [2024-10-15 02:10:39.970435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:31.127 [2024-10-15 02:10:39.970447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:31.127 [2024-10-15 02:10:39.970458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:31.127 [2024-10-15 02:10:39.970471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:39.970483] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:31.127 [2024-10-15 02:10:39.970495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:41:31.127 [2024-10-15 02:10:39.970519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.017661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.017727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:31.127 [2024-10-15 02:10:40.017748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.059 ms 00:41:31.127 [2024-10-15 02:10:40.017788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.017901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.017917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:31.127 [2024-10-15 02:10:40.017949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:41:31.127 [2024-10-15 02:10:40.017962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.060848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.060902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:31.127 [2024-10-15 02:10:40.060920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.792 ms 00:41:31.127 [2024-10-15 02:10:40.060931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.060991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.061005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:31.127 [2024-10-15 02:10:40.061018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:31.127 [2024-10-15 02:10:40.061028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.061725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.061750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:31.127 [2024-10-15 02:10:40.061777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:41:31.127 [2024-10-15 02:10:40.061789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.062008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.062033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:31.127 [2024-10-15 02:10:40.062047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:41:31.127 [2024-10-15 02:10:40.062058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.079948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.079999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:31.127 [2024-10-15 02:10:40.080015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.862 ms 00:41:31.127 [2024-10-15 02:10:40.080027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.094626] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: 
full chunks = 0, empty chunks = 4 00:41:31.127 [2024-10-15 02:10:40.094671] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:31.127 [2024-10-15 02:10:40.094690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.094703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:31.127 [2024-10-15 02:10:40.094717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.528 ms 00:41:31.127 [2024-10-15 02:10:40.094744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.118483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.118519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:31.127 [2024-10-15 02:10:40.118543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.690 ms 00:41:31.127 [2024-10-15 02:10:40.118556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.127 [2024-10-15 02:10:40.131432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.127 [2024-10-15 02:10:40.131477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:31.127 [2024-10-15 02:10:40.131493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.833 ms 00:41:31.127 [2024-10-15 02:10:40.131504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.145097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.145150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:31.387 [2024-10-15 02:10:40.145166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.515 ms 00:41:31.387 [2024-10-15 02:10:40.145177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.146029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.146060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:31.387 [2024-10-15 02:10:40.146075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:41:31.387 [2024-10-15 02:10:40.146085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.211537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.211602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:31.387 [2024-10-15 02:10:40.211623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.427 ms 00:41:31.387 [2024-10-15 02:10:40.211634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.221662] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:31.387 [2024-10-15 02:10:40.223815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.223859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:31.387 [2024-10-15 02:10:40.223875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.127 ms 00:41:31.387 [2024-10-15 02:10:40.223893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.223990] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.224010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:31.387 [2024-10-15 02:10:40.224024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:31.387 [2024-10-15 02:10:40.224034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.224177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.224207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:31.387 [2024-10-15 02:10:40.224223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:41:31.387 [2024-10-15 02:10:40.224234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.224274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.224290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:31.387 [2024-10-15 02:10:40.224303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:31.387 [2024-10-15 02:10:40.224315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.224359] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:31.387 [2024-10-15 02:10:40.224377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.224388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:31.387 [2024-10-15 02:10:40.224401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:41:31.387 [2024-10-15 02:10:40.224435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.249692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.249730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:31.387 [2024-10-15 02:10:40.249746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.213 ms 00:41:31.387 [2024-10-15 02:10:40.249758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.249839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.387 [2024-10-15 02:10:40.249856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:31.387 [2024-10-15 02:10:40.249869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:41:31.387 [2024-10-15 02:10:40.249880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.387 [2024-10-15 02:10:40.251468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.494 ms, result 0 00:41:32.323  [2024-10-15T02:11:24.411Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-10-15 02:11:24.275490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.275557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:15.399 [2024-10-15 02:11:24.275594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:15.399 [2024-10-15 02:11:24.275612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.275641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:15.399 [2024-10-15 02:11:24.279495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.279529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:15.399 [2024-10-15 02:11:24.279544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.832 ms 00:42:15.399 [2024-10-15 02:11:24.279555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.281404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.281477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:15.399 [2024-10-15 02:11:24.281494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:42:15.399 [2024-10-15
02:11:24.281505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.296123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.296174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:15.399 [2024-10-15 02:11:24.296190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.574 ms 00:42:15.399 [2024-10-15 02:11:24.296202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.301417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.301443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:15.399 [2024-10-15 02:11:24.301456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.179 ms 00:42:15.399 [2024-10-15 02:11:24.301466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.326801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.326853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:15.399 [2024-10-15 02:11:24.326883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.288 ms 00:42:15.399 [2024-10-15 02:11:24.326894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.341991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.342032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:15.399 [2024-10-15 02:11:24.342047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.058 ms 00:42:15.399 [2024-10-15 02:11:24.342058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.342175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.342193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:15.399 [2024-10-15 02:11:24.342205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:42:15.399 [2024-10-15 02:11:24.342215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.367237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.367271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:15.399 [2024-10-15 02:11:24.367285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.005 ms 00:42:15.399 [2024-10-15 02:11:24.367295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.399 [2024-10-15 02:11:24.391941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.399 [2024-10-15 02:11:24.391975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:15.399 [2024-10-15 02:11:24.391989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.607 ms 00:42:15.399 [2024-10-15 02:11:24.391999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.659 [2024-10-15 02:11:24.416862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.659 [2024-10-15 02:11:24.416897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:15.659 [2024-10-15 02:11:24.416911] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.825 ms 00:42:15.659 [2024-10-15 02:11:24.416922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.659 [2024-10-15 02:11:24.441325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.659 [2024-10-15 02:11:24.441368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:15.659 [2024-10-15 02:11:24.441383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.308 ms 00:42:15.659 [2024-10-15 02:11:24.441392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.659 [2024-10-15 02:11:24.441439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:15.659 [2024-10-15 02:11:24.441462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 
0 state: free 00:42:15.659 [2024-10-15 02:11:24.441722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.441993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
45: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442289] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:15.659 [2024-10-15 02:11:24.442300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442639] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:15.660 [2024-10-15 02:11:24.442718] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:15.660 [2024-10-15 02:11:24.442729] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ef74bb9-2e50-4b4e-aca6-8d1079fe565a 00:42:15.660 [2024-10-15 02:11:24.442741] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:15.660 [2024-10-15 02:11:24.442751] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:15.660 [2024-10-15 02:11:24.442762] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:15.660 [2024-10-15 02:11:24.442774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:15.660 [2024-10-15 02:11:24.442785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:15.660 [2024-10-15 02:11:24.442796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:15.660 [2024-10-15 02:11:24.442824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:15.660 [2024-10-15 02:11:24.442835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:15.660 [2024-10-15 02:11:24.442845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:15.660 [2024-10-15 02:11:24.442857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.660 [2024-10-15 02:11:24.442899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:15.660 [2024-10-15 02:11:24.442912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:42:15.660 [2024-10-15 02:11:24.442924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.457243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.660 [2024-10-15 02:11:24.457275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:15.660 [2024-10-15 02:11:24.457290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.295 ms 00:42:15.660 [2024-10-15 02:11:24.457314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.457835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.660 [2024-10-15 02:11:24.457861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:15.660 [2024-10-15 02:11:24.457875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:42:15.660 [2024-10-15 02:11:24.457886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.489439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 
02:11:24.489475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:15.660 [2024-10-15 02:11:24.489489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.489506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.489559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.489573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:15.660 [2024-10-15 02:11:24.489584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.489594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.489676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.489695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:15.660 [2024-10-15 02:11:24.489707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.489718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.489744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.489757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:15.660 [2024-10-15 02:11:24.489769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.489779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.573789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.573842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:15.660 [2024-10-15 02:11:24.573859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.573877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.642716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.642761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:15.660 [2024-10-15 02:11:24.642777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.642790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.642857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.642872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:15.660 [2024-10-15 02:11:24.642883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.642894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.642957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.642980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:15.660 [2024-10-15 02:11:24.642992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.643002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.643113] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.643133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:15.660 [2024-10-15 02:11:24.643146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.643156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.643199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.643221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:15.660 [2024-10-15 02:11:24.643233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.643244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.643285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.643300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:15.660 [2024-10-15 02:11:24.643311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.660 [2024-10-15 02:11:24.643321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.660 [2024-10-15 02:11:24.643369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:15.660 [2024-10-15 02:11:24.643390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:15.661 [2024-10-15 02:11:24.643421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:15.661 [2024-10-15 02:11:24.643469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.661 [2024-10-15 02:11:24.643605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.082 ms, result 0 00:42:15.661 [2024-10-15 02:11:24.644640] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:42:15.661 [2024-10-15 02:11:24.647881] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:42:17.035 00:42:17.035 00:42:17.035 02:11:25 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:42:17.035 [2024-10-15 02:11:25.747670] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:42:17.035 [2024-10-15 02:11:25.747900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77552 ] 00:42:17.035 [2024-10-15 02:11:25.915885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.293 [2024-10-15 02:11:26.111816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.552 [2024-10-15 02:11:26.430054] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:17.552 [2024-10-15 02:11:26.430142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:17.811 [2024-10-15 02:11:26.576666] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:42:17.811 [2024-10-15 02:11:26.589357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.589399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:17.811 [2024-10-15 02:11:26.589452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:17.811 [2024-10-15 02:11:26.589469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.589532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.589550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:17.811 [2024-10-15 02:11:26.589578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:42:17.811 [2024-10-15 02:11:26.589589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.589617] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:17.811 [2024-10-15 02:11:26.590471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:17.811 [2024-10-15 02:11:26.590512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.590525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:17.811 [2024-10-15 02:11:26.590562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:42:17.811 [2024-10-15 02:11:26.590591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.592550] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:17.811 [2024-10-15 02:11:26.606394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.606458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:17.811 [2024-10-15 02:11:26.606491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.846 ms 00:42:17.811 [2024-10-15 02:11:26.606503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.606624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.606645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:17.811 [2024-10-15 02:11:26.606659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:42:17.811 [2024-10-15 
02:11:26.606670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.615171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.615210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:17.811 [2024-10-15 02:11:26.615240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.421 ms 00:42:17.811 [2024-10-15 02:11:26.615252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.615360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.615380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:17.811 [2024-10-15 02:11:26.615393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:42:17.811 [2024-10-15 02:11:26.615404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.615508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.615532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:17.811 [2024-10-15 02:11:26.615545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:42:17.811 [2024-10-15 02:11:26.615556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.615588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:17.811 [2024-10-15 02:11:26.619918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.619953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:17.811 [2024-10-15 02:11:26.619982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.338 ms 00:42:17.811 [2024-10-15 02:11:26.619993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.620027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.811 [2024-10-15 02:11:26.620042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:17.811 [2024-10-15 02:11:26.620054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:17.811 [2024-10-15 02:11:26.620070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.811 [2024-10-15 02:11:26.620165] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:17.811 [2024-10-15 02:11:26.620198] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:17.811 [2024-10-15 02:11:26.620239] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:17.811 [2024-10-15 02:11:26.620259] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:17.811 [2024-10-15 02:11:26.620357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:17.811 [2024-10-15 02:11:26.620372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:17.811 [2024-10-15 02:11:26.620391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:17.811 
[2024-10-15 02:11:26.620406] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:17.811 [2024-10-15 02:11:26.620434] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:17.811 [2024-10-15 02:11:26.620446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:17.812 [2024-10-15 02:11:26.620457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:17.812 [2024-10-15 02:11:26.620483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:17.812 [2024-10-15 02:11:26.620496] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:17.812 [2024-10-15 02:11:26.620509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.620520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:17.812 [2024-10-15 02:11:26.620532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:42:17.812 [2024-10-15 02:11:26.620543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.620641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.620658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:17.812 [2024-10-15 02:11:26.620670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:42:17.812 [2024-10-15 02:11:26.620681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.620791] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:17.812 [2024-10-15 02:11:26.620835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:17.812 [2024-10-15 02:11:26.620850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:17.812 [2024-10-15 02:11:26.620862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.620873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:17.812 [2024-10-15 02:11:26.620883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.620893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:17.812 [2024-10-15 02:11:26.620903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:17.812 [2024-10-15 02:11:26.620913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:17.812 [2024-10-15 02:11:26.620923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:17.812 [2024-10-15 02:11:26.620933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:17.812 [2024-10-15 02:11:26.620943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:17.812 [2024-10-15 02:11:26.620966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:17.812 [2024-10-15 02:11:26.620977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:17.812 [2024-10-15 02:11:26.620988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:17.812 [2024-10-15 02:11:26.621000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:42:17.812 [2024-10-15 02:11:26.621021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:17.812 [2024-10-15 02:11:26.621052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:17.812 [2024-10-15 02:11:26.621083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:17.812 [2024-10-15 02:11:26.621113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:17.812 [2024-10-15 02:11:26.621143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:17.812 [2024-10-15 02:11:26.621173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:17.812 [2024-10-15 02:11:26.621193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:17.812 [2024-10-15 02:11:26.621204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:17.812 [2024-10-15 02:11:26.621214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:17.812 [2024-10-15 02:11:26.621224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:17.812 [2024-10-15 02:11:26.621234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:17.812 [2024-10-15 02:11:26.621245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:17.812 [2024-10-15 02:11:26.621265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:17.812 [2024-10-15 02:11:26.621276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621286] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:17.812 [2024-10-15 02:11:26.621303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:17.812 [2024-10-15 02:11:26.621314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:17.812 [2024-10-15 02:11:26.621337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:17.812 [2024-10-15 02:11:26.621348] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:17.812 [2024-10-15 02:11:26.621359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:17.812 [2024-10-15 02:11:26.621370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:17.812 [2024-10-15 02:11:26.621380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:17.812 [2024-10-15 02:11:26.621390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:17.812 [2024-10-15 02:11:26.621418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:17.812 [2024-10-15 02:11:26.621452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:17.812 [2024-10-15 02:11:26.621476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:17.812 [2024-10-15 02:11:26.621487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:17.812 [2024-10-15 02:11:26.621498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:17.812 [2024-10-15 02:11:26.621509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:17.812 [2024-10-15 02:11:26.621521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:17.812 [2024-10-15 02:11:26.621532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:17.812 [2024-10-15 02:11:26.621544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:17.812 [2024-10-15 02:11:26.621555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:17.812 [2024-10-15 02:11:26.621566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:17.812 [2024-10-15 02:11:26.621621] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:17.812 [2024-10-15 02:11:26.621634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:17.812 [2024-10-15 02:11:26.621658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:17.812 [2024-10-15 02:11:26.621669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:17.812 [2024-10-15 02:11:26.621680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:17.812 [2024-10-15 02:11:26.621693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.621705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:17.812 [2024-10-15 02:11:26.621717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:42:17.812 [2024-10-15 02:11:26.621727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.666030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.666134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:17.812 [2024-10-15 02:11:26.666177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.215 ms 00:42:17.812 [2024-10-15 02:11:26.666196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.666378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.666416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:17.812 [2024-10-15 02:11:26.666436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:42:17.812 [2024-10-15 02:11:26.666447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.703923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.703972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:17.812 [2024-10-15 02:11:26.704004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.253 ms 00:42:17.812 [2024-10-15 02:11:26.704015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.704065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.812 [2024-10-15 02:11:26.704082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:17.812 [2024-10-15 02:11:26.704094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:17.812 [2024-10-15 02:11:26.704104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.812 [2024-10-15 02:11:26.704781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.704835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:17.813 [2024-10-15 02:11:26.704849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:42:17.813 [2024-10-15 02:11:26.704860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.705042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:42:17.813 [2024-10-15 02:11:26.705066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:17.813 [2024-10-15 02:11:26.705080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:42:17.813 [2024-10-15 02:11:26.705091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.720751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.720786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:17.813 [2024-10-15 02:11:26.720817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.633 ms 00:42:17.813 [2024-10-15 02:11:26.720828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.734927] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:17.813 [2024-10-15 02:11:26.734966] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:17.813 [2024-10-15 02:11:26.735008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.735020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:17.813 [2024-10-15 02:11:26.735032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.019 ms 00:42:17.813 [2024-10-15 02:11:26.735047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.758683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.758740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:17.813 [2024-10-15 02:11:26.758778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.583 ms 00:42:17.813 [2024-10-15 02:11:26.758790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.771434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.771472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:17.813 [2024-10-15 02:11:26.771502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.591 ms 00:42:17.813 [2024-10-15 02:11:26.771526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.783869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.783907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:17.813 [2024-10-15 02:11:26.783938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.301 ms 00:42:17.813 [2024-10-15 02:11:26.783948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:17.813 [2024-10-15 02:11:26.784694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:17.813 [2024-10-15 02:11:26.784743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:17.813 [2024-10-15 02:11:26.784774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:42:17.813 [2024-10-15 02:11:26.784785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.849587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.849664] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:18.072 [2024-10-15 02:11:26.849700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.762 ms 00:42:18.072 [2024-10-15 02:11:26.849712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.859554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:18.072 [2024-10-15 02:11:26.861616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.861650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:18.072 [2024-10-15 02:11:26.861680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.834 ms 00:42:18.072 [2024-10-15 02:11:26.861696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.861795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.861815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:18.072 [2024-10-15 02:11:26.861828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:18.072 [2024-10-15 02:11:26.861839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.861984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.862007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:18.072 [2024-10-15 02:11:26.862019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:42:18.072 [2024-10-15 02:11:26.862037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.862083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.862100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:18.072 [2024-10-15 02:11:26.862113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:42:18.072 [2024-10-15 02:11:26.862124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.862199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:18.072 [2024-10-15 02:11:26.862220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.862248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:18.072 [2024-10-15 02:11:26.862260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:42:18.072 [2024-10-15 02:11:26.862276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.887686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.887728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:18.072 [2024-10-15 02:11:26.887761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.380 ms 00:42:18.072 [2024-10-15 02:11:26.887773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.887864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.072 [2024-10-15 02:11:26.887898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:18.072 [2024-10-15 02:11:26.887915] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:42:18.072 [2024-10-15 02:11:26.887926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.072 [2024-10-15 02:11:26.889606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.569 ms, result 0 00:42:19.458  [2024-10-15T02:11:29.405Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-15T02:11:30.341Z] Copying: 44/1024 [MB] (22 MBps) [2024-10-15T02:11:31.278Z] Copying: 66/1024 [MB] (22 MBps) [2024-10-15T02:11:32.215Z] Copying: 88/1024 [MB] (21 MBps) [2024-10-15T02:11:33.151Z] Copying: 110/1024 [MB] (22 MBps) [2024-10-15T02:11:34.125Z] Copying: 132/1024 [MB] (22 MBps) [2024-10-15T02:11:35.503Z] Copying: 154/1024 [MB] (22 MBps) [2024-10-15T02:11:36.071Z] Copying: 176/1024 [MB] (21 MBps) [2024-10-15T02:11:37.450Z] Copying: 198/1024 [MB] (21 MBps) [2024-10-15T02:11:38.388Z] Copying: 221/1024 [MB] (22 MBps) [2024-10-15T02:11:39.325Z] Copying: 243/1024 [MB] (22 MBps) [2024-10-15T02:11:40.262Z] Copying: 265/1024 [MB] (22 MBps) [2024-10-15T02:11:41.199Z] Copying: 288/1024 [MB] (22 MBps) [2024-10-15T02:11:42.136Z] Copying: 310/1024 [MB] (22 MBps) [2024-10-15T02:11:43.072Z] Copying: 332/1024 [MB] (21 MBps) [2024-10-15T02:11:44.451Z] Copying: 354/1024 [MB] (22 MBps) [2024-10-15T02:11:45.386Z] Copying: 377/1024 [MB] (22 MBps) [2024-10-15T02:11:46.321Z] Copying: 400/1024 [MB] (22 MBps) [2024-10-15T02:11:47.255Z] Copying: 423/1024 [MB] (23 MBps) [2024-10-15T02:11:48.191Z] Copying: 446/1024 [MB] (22 MBps) [2024-10-15T02:11:49.124Z] Copying: 468/1024 [MB] (22 MBps) [2024-10-15T02:11:50.501Z] Copying: 491/1024 [MB] (22 MBps) [2024-10-15T02:11:51.069Z] Copying: 513/1024 [MB] (22 MBps) [2024-10-15T02:11:52.445Z] Copying: 536/1024 [MB] (22 MBps) [2024-10-15T02:11:53.382Z] Copying: 560/1024 [MB] (23 MBps) [2024-10-15T02:11:54.378Z] Copying: 583/1024 [MB] (23 MBps) [2024-10-15T02:11:55.318Z] Copying: 606/1024 [MB] (23 MBps) [2024-10-15T02:11:56.255Z] Copying: 629/1024 [MB] (23 MBps) [2024-10-15T02:11:57.191Z] Copying: 652/1024 [MB] (22 MBps) [2024-10-15T02:11:58.126Z] Copying: 675/1024 [MB] (22 MBps) [2024-10-15T02:11:59.503Z] Copying: 698/1024 [MB] (22 MBps) [2024-10-15T02:12:00.070Z] Copying: 721/1024 [MB] (22 MBps) [2024-10-15T02:12:01.447Z] Copying: 744/1024 [MB] (22 MBps) [2024-10-15T02:12:02.383Z] Copying: 767/1024 [MB] (23 MBps) [2024-10-15T02:12:03.317Z] Copying: 790/1024 [MB] (23 MBps) [2024-10-15T02:12:04.251Z] Copying: 814/1024 [MB] (23 MBps) [2024-10-15T02:12:05.188Z] Copying: 837/1024 [MB] (23 MBps) [2024-10-15T02:12:06.124Z] Copying: 861/1024 [MB] (23 MBps) [2024-10-15T02:12:07.501Z] Copying: 884/1024 [MB] (23 MBps) [2024-10-15T02:12:08.435Z] Copying: 907/1024 [MB] (22 MBps) [2024-10-15T02:12:09.369Z] Copying: 930/1024 [MB] (22 MBps) [2024-10-15T02:12:10.304Z] Copying: 953/1024 [MB] (23 MBps) [2024-10-15T02:12:11.239Z] Copying: 976/1024 [MB] (22 MBps) [2024-10-15T02:12:12.173Z] Copying: 999/1024 [MB] (23 MBps) [2024-10-15T02:12:12.173Z] Copying: 1022/1024 [MB] (23 MBps) [2024-10-15T02:12:12.433Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-10-15 02:12:12.233786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.421 [2024-10-15 02:12:12.234176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:03.421 [2024-10-15 02:12:12.234232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:03.421 [2024-10-15 02:12:12.234253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:43:03.421 [2024-10-15 02:12:12.234309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:03.421 [2024-10-15 02:12:12.239345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.421 [2024-10-15 02:12:12.239380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:03.421 [2024-10-15 02:12:12.239394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.002 ms 00:43:03.421 [2024-10-15 02:12:12.239415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.421 [2024-10-15 02:12:12.239628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.421 [2024-10-15 02:12:12.239645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:03.421 [2024-10-15 02:12:12.239661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:43:03.421 [2024-10-15 02:12:12.239672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.421 [2024-10-15 02:12:12.243432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.421 [2024-10-15 02:12:12.243467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:03.421 [2024-10-15 02:12:12.243481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:43:03.421 [2024-10-15 02:12:12.243491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.421 [2024-10-15 02:12:12.248639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.248666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:03.422 [2024-10-15 02:12:12.248679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.126 ms 00:43:03.422 [2024-10-15 02:12:12.248694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.273888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.273925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:03.422 [2024-10-15 02:12:12.273940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.131 ms 00:43:03.422 [2024-10-15 02:12:12.273950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.288796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.288835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:03.422 [2024-10-15 02:12:12.288850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.808 ms 00:43:03.422 [2024-10-15 02:12:12.288861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.289007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.289029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:03.422 [2024-10-15 02:12:12.289041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:43:03.422 [2024-10-15 02:12:12.289051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.313559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.313597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 
00:43:03.422 [2024-10-15 02:12:12.313612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.488 ms 00:43:03.422 [2024-10-15 02:12:12.313622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.337636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.337673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:03.422 [2024-10-15 02:12:12.337686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.977 ms 00:43:03.422 [2024-10-15 02:12:12.337696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.361267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.361303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:03.422 [2024-10-15 02:12:12.361317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.522 ms 00:43:03.422 [2024-10-15 02:12:12.361327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.386057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.422 [2024-10-15 02:12:12.386093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:03.422 [2024-10-15 02:12:12.386107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.669 ms 00:43:03.422 [2024-10-15 02:12:12.386117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.422 [2024-10-15 02:12:12.386154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:03.422 [2024-10-15 02:12:12.386192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:43:03.422 [2024-10-15 02:12:12.386353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.386985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.387012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:03.422 [2024-10-15 02:12:12.387022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387293] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:03.423 [2024-10-15 02:12:12.387462] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:03.423 [2024-10-15 02:12:12.387489] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ef74bb9-2e50-4b4e-aca6-8d1079fe565a 00:43:03.423 [2024-10-15 02:12:12.387502] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:03.423 [2024-10-15 02:12:12.387512] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:03.423 [2024-10-15 02:12:12.387522] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:03.423 [2024-10-15 02:12:12.387533] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:03.423 [2024-10-15 02:12:12.387550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:03.423 [2024-10-15 02:12:12.387561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:03.423 [2024-10-15 02:12:12.387572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:03.423 [2024-10-15 02:12:12.387582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:03.423 [2024-10-15 02:12:12.387591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:03.423 [2024-10-15 02:12:12.387612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.423 [2024-10-15 02:12:12.387648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:03.423 [2024-10-15 02:12:12.387660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:43:03.423 [2024-10-15 02:12:12.387671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.423 [2024-10-15 02:12:12.402083] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.423 [2024-10-15 02:12:12.402114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:03.423 [2024-10-15 02:12:12.402135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.353 ms 00:43:03.423 [2024-10-15 02:12:12.402145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.423 [2024-10-15 02:12:12.402618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:03.423 [2024-10-15 02:12:12.402646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:03.423 [2024-10-15 02:12:12.402676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:43:03.423 [2024-10-15 02:12:12.402688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.682 [2024-10-15 02:12:12.434816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.682 [2024-10-15 02:12:12.434872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:03.682 [2024-10-15 02:12:12.434903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.682 [2024-10-15 02:12:12.434913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.434965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.434978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:03.683 [2024-10-15 02:12:12.434990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.435001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.435087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.435118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:03.683 [2024-10-15 02:12:12.435137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.435148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.435171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.435184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:03.683 [2024-10-15 02:12:12.435195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.435205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.519674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.519731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:03.683 [2024-10-15 02:12:12.519747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.519758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.588760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.588808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:03.683 [2024-10-15 02:12:12.588824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.588835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.588925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.588941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:03.683 [2024-10-15 02:12:12.588952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.588968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.589008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.589022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:03.683 [2024-10-15 02:12:12.589034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.589044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.589146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.589164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:03.683 [2024-10-15 02:12:12.589176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.589187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.589237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.589253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:03.683 [2024-10-15 02:12:12.589264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.589274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.589316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.589331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:03.683 [2024-10-15 02:12:12.589342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.589351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.589440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:03.683 [2024-10-15 02:12:12.589458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:03.683 [2024-10-15 02:12:12.589470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:03.683 [2024-10-15 02:12:12.589481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:03.683 [2024-10-15 02:12:12.589686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.842 ms, result 0 00:43:03.683 [2024-10-15 02:12:12.590855] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:43:03.683 [2024-10-15 02:12:12.593798] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 
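The 'FTL startup' and 'FTL shutdown' traces above are built from repeated trace_step groups: an Action marker followed by name, duration, and status entries for each management step. A minimal offline sketch for ranking those steps by duration — assuming the console output has been saved one entry per line, and with a hypothetical LOG path:

  # Hypothetical helper: rank FTL management steps by duration.
  LOG=ftl_restore_console.log
  awk '
    /trace_step.*name: /     { sub(/.*name: /, "");     name = $0 }
    /trace_step.*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                               printf "%10.3f ms  %s\n", $0 + 0, name }
  ' "$LOG" | sort -rn | head

Run against the shutdown sequence above, this surfaces Persist NV cache metadata (25.131 ms), Set FTL clean state (24.669 ms), and Persist band info metadata (24.488 ms) as the slowest steps.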
00:43:04.649 00:43:04.649 00:43:04.649 02:12:13 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:43:06.550 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:43:06.550 02:12:15 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:43:06.550 [2024-10-15 02:12:15.390462] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:43:06.550 [2024-10-15 02:12:15.390670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78054 ] 00:43:06.808 [2024-10-15 02:12:15.570640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:06.808 [2024-10-15 02:12:15.795557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.376 [2024-10-15 02:12:16.097867] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:07.376 [2024-10-15 02:12:16.097927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:07.377 [2024-10-15 02:12:16.245914] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:43:07.377 [2024-10-15 02:12:16.258709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.258746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:07.377 [2024-10-15 02:12:16.258762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:07.377 [2024-10-15 02:12:16.258776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.258837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.258860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:07.377 [2024-10-15 02:12:16.258871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:43:07.377 [2024-10-15 02:12:16.258882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.258907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:07.377 [2024-10-15 02:12:16.259687] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:07.377 [2024-10-15 02:12:16.259720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.259732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:07.377 [2024-10-15 02:12:16.259744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:43:07.377 [2024-10-15 02:12:16.259755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.261751] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:07.377 [2024-10-15 02:12:16.275806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.275839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:07.377 
[2024-10-15 02:12:16.275853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.056 ms 00:43:07.377 [2024-10-15 02:12:16.275863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.275922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.275939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:07.377 [2024-10-15 02:12:16.275951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:43:07.377 [2024-10-15 02:12:16.275960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.284523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.284556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:07.377 [2024-10-15 02:12:16.284570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.484 ms 00:43:07.377 [2024-10-15 02:12:16.284580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.284665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.284682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:07.377 [2024-10-15 02:12:16.284694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:43:07.377 [2024-10-15 02:12:16.284705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.284774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.284789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:07.377 [2024-10-15 02:12:16.284800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:07.377 [2024-10-15 02:12:16.284809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.284853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:07.377 [2024-10-15 02:12:16.289227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.289254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:07.377 [2024-10-15 02:12:16.289267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:43:07.377 [2024-10-15 02:12:16.289277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.289309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.289322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:07.377 [2024-10-15 02:12:16.289332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:07.377 [2024-10-15 02:12:16.289347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.289434] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:07.377 [2024-10-15 02:12:16.289487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:07.377 [2024-10-15 02:12:16.289526] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:07.377 [2024-10-15 02:12:16.289545] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:07.377 [2024-10-15 02:12:16.289641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:07.377 [2024-10-15 02:12:16.289656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:07.377 [2024-10-15 02:12:16.289675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:07.377 [2024-10-15 02:12:16.289688] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:07.377 [2024-10-15 02:12:16.289701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:07.377 [2024-10-15 02:12:16.289712] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:07.377 [2024-10-15 02:12:16.289723] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:07.377 [2024-10-15 02:12:16.289733] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:07.377 [2024-10-15 02:12:16.289744] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:07.377 [2024-10-15 02:12:16.289755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.289766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:07.377 [2024-10-15 02:12:16.289777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:43:07.377 [2024-10-15 02:12:16.289803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.289900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.377 [2024-10-15 02:12:16.289913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:07.377 [2024-10-15 02:12:16.289924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:43:07.377 [2024-10-15 02:12:16.289933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.377 [2024-10-15 02:12:16.290028] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:07.377 [2024-10-15 02:12:16.290045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:07.377 [2024-10-15 02:12:16.290056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:07.377 [2024-10-15 02:12:16.290085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:07.377 [2024-10-15 02:12:16.290114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:07.377 [2024-10-15 02:12:16.290132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:07.377 [2024-10-15 02:12:16.290141] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:43:07.377 [2024-10-15 02:12:16.290161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:07.377 [2024-10-15 02:12:16.290171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:07.377 [2024-10-15 02:12:16.290182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:07.377 [2024-10-15 02:12:16.290191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:07.377 [2024-10-15 02:12:16.290209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:07.377 [2024-10-15 02:12:16.290252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:07.377 [2024-10-15 02:12:16.290281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:07.377 [2024-10-15 02:12:16.290308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:07.377 [2024-10-15 02:12:16.290334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:07.377 [2024-10-15 02:12:16.290352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:07.377 [2024-10-15 02:12:16.290361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:07.377 [2024-10-15 02:12:16.290379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:07.377 [2024-10-15 02:12:16.290388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:07.377 [2024-10-15 02:12:16.290398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:07.377 [2024-10-15 02:12:16.290406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:07.377 [2024-10-15 02:12:16.290431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:43:07.377 [2024-10-15 02:12:16.290440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:07.377 [2024-10-15 02:12:16.290460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:07.377 [2024-10-15 02:12:16.290470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:07.377 [2024-10-15 02:12:16.290499] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:07.378 [2024-10-15 02:12:16.290517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:07.378 [2024-10-15 02:12:16.290527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:07.378 [2024-10-15 02:12:16.290566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:07.378 [2024-10-15 02:12:16.290579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:07.378 [2024-10-15 02:12:16.290589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:07.378 [2024-10-15 02:12:16.290599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:07.378 [2024-10-15 02:12:16.290610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:07.378 [2024-10-15 02:12:16.290619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:07.378 [2024-10-15 02:12:16.290629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:07.378 [2024-10-15 02:12:16.290641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:07.378 [2024-10-15 02:12:16.290657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:07.378 [2024-10-15 02:12:16.290679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:07.378 [2024-10-15 02:12:16.290690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:07.378 [2024-10-15 02:12:16.290700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:07.378 [2024-10-15 02:12:16.290711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:07.378 [2024-10-15 02:12:16.290721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:07.378 [2024-10-15 02:12:16.290733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:07.378 [2024-10-15 02:12:16.290743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:07.378 [2024-10-15 02:12:16.290754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:07.378 [2024-10-15 02:12:16.290764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290810] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:07.378 [2024-10-15 02:12:16.290830] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:07.378 [2024-10-15 02:12:16.290842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:07.378 [2024-10-15 02:12:16.290865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:07.378 [2024-10-15 02:12:16.290875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:07.378 [2024-10-15 02:12:16.290886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:07.378 [2024-10-15 02:12:16.290897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.290908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:07.378 [2024-10-15 02:12:16.290918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:43:07.378 [2024-10-15 02:12:16.290930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.332243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.332294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:07.378 [2024-10-15 02:12:16.332311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.233 ms 00:43:07.378 [2024-10-15 02:12:16.332327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.332441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.332458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:07.378 [2024-10-15 02:12:16.332469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:43:07.378 [2024-10-15 02:12:16.332479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.369045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.369089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:07.378 [2024-10-15 02:12:16.369104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.487 ms 00:43:07.378 [2024-10-15 02:12:16.369115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.369163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.369177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:07.378 [2024-10-15 02:12:16.369188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:07.378 [2024-10-15 02:12:16.369198] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.369850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.369874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:07.378 [2024-10-15 02:12:16.369894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:43:07.378 [2024-10-15 02:12:16.369905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.370055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.370072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:07.378 [2024-10-15 02:12:16.370083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:43:07.378 [2024-10-15 02:12:16.370093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.378 [2024-10-15 02:12:16.385717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.378 [2024-10-15 02:12:16.385765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:07.378 [2024-10-15 02:12:16.385781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.599 ms 00:43:07.378 [2024-10-15 02:12:16.385791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.400085] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:43:07.637 [2024-10-15 02:12:16.400121] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:07.637 [2024-10-15 02:12:16.400136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.400147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:07.637 [2024-10-15 02:12:16.400159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms 00:43:07.637 [2024-10-15 02:12:16.400168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.423331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.423366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:07.637 [2024-10-15 02:12:16.423380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.121 ms 00:43:07.637 [2024-10-15 02:12:16.423391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.435648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.435683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:07.637 [2024-10-15 02:12:16.435696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.216 ms 00:43:07.637 [2024-10-15 02:12:16.435719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.447704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.447736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:07.637 [2024-10-15 02:12:16.447749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.947 ms 00:43:07.637 [2024-10-15 02:12:16.447759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 
[2024-10-15 02:12:16.448374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.448399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:07.637 [2024-10-15 02:12:16.448425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:43:07.637 [2024-10-15 02:12:16.448452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.512617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.512684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:07.637 [2024-10-15 02:12:16.512701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.140 ms 00:43:07.637 [2024-10-15 02:12:16.512712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.522302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:07.637 [2024-10-15 02:12:16.524311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.524338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:07.637 [2024-10-15 02:12:16.524356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.538 ms 00:43:07.637 [2024-10-15 02:12:16.524367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.524462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.524480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:07.637 [2024-10-15 02:12:16.524493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:07.637 [2024-10-15 02:12:16.524503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.637 [2024-10-15 02:12:16.524589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.637 [2024-10-15 02:12:16.524606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:07.637 [2024-10-15 02:12:16.524617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:43:07.638 [2024-10-15 02:12:16.524632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.638 [2024-10-15 02:12:16.524656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.638 [2024-10-15 02:12:16.524668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:07.638 [2024-10-15 02:12:16.524678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:07.638 [2024-10-15 02:12:16.524688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.638 [2024-10-15 02:12:16.524726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:07.638 [2024-10-15 02:12:16.524741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.638 [2024-10-15 02:12:16.524750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:07.638 [2024-10-15 02:12:16.524766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:43:07.638 [2024-10-15 02:12:16.524775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.638 [2024-10-15 02:12:16.549241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.638 [2024-10-15 
02:12:16.549278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:07.638 [2024-10-15 02:12:16.549293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.440 ms 00:43:07.638 [2024-10-15 02:12:16.549303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.638 [2024-10-15 02:12:16.549379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:07.638 [2024-10-15 02:12:16.549396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:07.638 [2024-10-15 02:12:16.549420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:43:07.638 [2024-10-15 02:12:16.549435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:07.638 [2024-10-15 02:12:16.550981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.628 ms, result 0 00:43:08.573
[2024-10-15T02:13:00.506Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-10-15 02:13:00.465967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.494 [2024-10-15 02:13:00.466078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:51.494 [2024-10-15 02:13:00.466130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:51.494 [2024-10-15 02:13:00.466142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.494 [2024-10-15 02:13:00.468546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:51.494 [2024-10-15 02:13:00.474229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.494 [2024-10-15 02:13:00.474394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:51.494 [2024-10-15 02:13:00.474458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.393 ms 00:43:51.494 [2024-10-15 02:13:00.474472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.494 [2024-10-15 02:13:00.485906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.494 [2024-10-15 02:13:00.485942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:51.494 [2024-10-15 02:13:00.485957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.241 ms 00:43:51.494 [2024-10-15 02:13:00.485968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.752 [2024-10-15 02:13:00.507364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.752 [2024-10-15 02:13:00.507442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:51.752 [2024-10-15 02:13:00.507476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.378 ms 00:43:51.752 [2024-10-15 02:13:00.507488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.752 [2024-10-15 02:13:00.513021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.752 [2024-10-15 02:13:00.513048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:51.752 [2024-10-15 02:13:00.513060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.487 ms 00:43:51.752 [2024-10-15 02:13:00.513070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.752 [2024-10-15 02:13:00.539146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.752 [2024-10-15 02:13:00.539195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:51.752 [2024-10-15 02:13:00.539211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.022 ms 00:43:51.752 [2024-10-15 02:13:00.539221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.752 [2024-10-15 02:13:00.554094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.753 [2024-10-15 02:13:00.554128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:51.753 [2024-10-15 02:13:00.554142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.837 ms 00:43:51.753 [2024-10-15 02:13:00.554153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.753 [2024-10-15 02:13:00.667044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.753 [2024-10-15 02:13:00.667115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:43:51.753 [2024-10-15 02:13:00.667134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.853 ms 00:43:51.753 [2024-10-15 02:13:00.667144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.753 [2024-10-15 02:13:00.691845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.753 [2024-10-15 02:13:00.691878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:51.753 [2024-10-15 02:13:00.691891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.681 ms 00:43:51.753 [2024-10-15 02:13:00.691901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.753 [2024-10-15 02:13:00.715996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.753 [2024-10-15 02:13:00.716029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:51.753 [2024-10-15 02:13:00.716042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.060 ms 00:43:51.753 [2024-10-15 02:13:00.716051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:51.753 [2024-10-15 02:13:00.739859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:51.753 [2024-10-15 02:13:00.739891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:51.753 [2024-10-15 02:13:00.739904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.759 ms 00:43:51.753 [2024-10-15 02:13:00.739913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.013 [2024-10-15 02:13:00.764447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:52.013 [2024-10-15 02:13:00.764503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:52.013 [2024-10-15 02:13:00.764533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.476 ms 00:43:52.013 [2024-10-15 02:13:00.764543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.013 [2024-10-15 02:13:00.764596] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:52.013 [2024-10-15 02:13:00.764616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121344 / 261120 wr_cnt: 1 state: open 00:43:52.013 [2024-10-15 02:13:00.764628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764727] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:52.013 [2024-10-15 02:13:00.764963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.764974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765042] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 
02:13:00.765321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:43:52.014 [2024-10-15 02:13:00.765620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:52.014 [2024-10-15 02:13:00.765820] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:52.014 [2024-10-15 02:13:00.765836] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ef74bb9-2e50-4b4e-aca6-8d1079fe565a 00:43:52.014 [2024-10-15 02:13:00.765847] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121344 00:43:52.014 [2024-10-15 02:13:00.765857] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122304 00:43:52.014 [2024-10-15 02:13:00.765882] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121344 00:43:52.014 [2024-10-15 02:13:00.765894] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:43:52.014 [2024-10-15 02:13:00.765904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:52.014 [2024-10-15 02:13:00.765914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:52.014 [2024-10-15 02:13:00.765925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:52.014 [2024-10-15 02:13:00.765934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:52.014 [2024-10-15 02:13:00.765943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:43:52.014 [2024-10-15 02:13:00.765964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:52.014 [2024-10-15 02:13:00.765975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:52.014 [2024-10-15 02:13:00.765987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:43:52.014 [2024-10-15 02:13:00.765996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.014 [2024-10-15 02:13:00.780302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:52.014 [2024-10-15 02:13:00.780331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:52.014 [2024-10-15 02:13:00.780344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.263 ms 00:43:52.014 [2024-10-15 02:13:00.780354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.014 [2024-10-15 02:13:00.780847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:52.014 [2024-10-15 02:13:00.780869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:52.014 [2024-10-15 02:13:00.780889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:43:52.014 [2024-10-15 02:13:00.780899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.014 [2024-10-15 02:13:00.811875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.014 [2024-10-15 02:13:00.811911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:52.014 [2024-10-15 02:13:00.811924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.014 [2024-10-15 02:13:00.811933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.014 [2024-10-15 02:13:00.811986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.014 [2024-10-15 02:13:00.811999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:52.015 [2024-10-15 02:13:00.812015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.812024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.812102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.812118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:52.015 [2024-10-15 02:13:00.812129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.812139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.812156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.812167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:52.015 [2024-10-15 02:13:00.812177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.812191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.895553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.895606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:52.015 [2024-10-15 02:13:00.895621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 
[2024-10-15 02:13:00.895631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:52.015 [2024-10-15 02:13:00.964233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:52.015 [2024-10-15 02:13:00.964336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:52.015 [2024-10-15 02:13:00.964474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:52.015 [2024-10-15 02:13:00.964630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:52.015 [2024-10-15 02:13:00.964706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:52.015 [2024-10-15 02:13:00.964788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.964858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:52.015 [2024-10-15 02:13:00.964872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:52.015 [2024-10-15 02:13:00.964882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:52.015 [2024-10-15 02:13:00.964892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:52.015 [2024-10-15 02:13:00.965019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 502.368 ms, result 0 00:43:52.015 [2024-10-15 02:13:00.966867] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] 
qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:43:52.015 [2024-10-15 02:13:00.970061] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:43:53.916 00:43:53.916 00:43:53.916 02:13:02 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:43:53.916 [2024-10-15 02:13:02.633028] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:43:53.916 [2024-10-15 02:13:02.633201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78519 ] 00:43:53.916 [2024-10-15 02:13:02.804899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.175 [2024-10-15 02:13:02.999161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:54.433 [2024-10-15 02:13:03.297858] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:54.433 [2024-10-15 02:13:03.297923] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:54.433 [2024-10-15 02:13:03.443452] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:43:54.693 [2024-10-15 02:13:03.455896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.455932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:54.693 [2024-10-15 02:13:03.455950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:54.693 [2024-10-15 02:13:03.455965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.456020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.456037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:54.693 [2024-10-15 02:13:03.456047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:43:54.693 [2024-10-15 02:13:03.456057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.456083] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:54.693 [2024-10-15 02:13:03.456867] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:54.693 [2024-10-15 02:13:03.456896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.456909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:54.693 [2024-10-15 02:13:03.456921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:43:54.693 [2024-10-15 02:13:03.456930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.458878] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:54.693 [2024-10-15 02:13:03.472463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.472498] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:54.693 [2024-10-15 02:13:03.472514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.587 ms 00:43:54.693 [2024-10-15 02:13:03.472524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.472583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.472599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:54.693 [2024-10-15 02:13:03.472610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:43:54.693 [2024-10-15 02:13:03.472619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.480837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.480870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:54.693 [2024-10-15 02:13:03.480883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.152 ms 00:43:54.693 [2024-10-15 02:13:03.480893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.480977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.480992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:54.693 [2024-10-15 02:13:03.481003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:43:54.693 [2024-10-15 02:13:03.481013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.481084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.481100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:54.693 [2024-10-15 02:13:03.481111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:54.693 [2024-10-15 02:13:03.481120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.481150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:54.693 [2024-10-15 02:13:03.485394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.485432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:54.693 [2024-10-15 02:13:03.485445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:43:54.693 [2024-10-15 02:13:03.485455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.485487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.485500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:54.693 [2024-10-15 02:13:03.485511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:54.693 [2024-10-15 02:13:03.485527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.485567] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:54.693 [2024-10-15 02:13:03.485597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:54.693 [2024-10-15 02:13:03.485633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob load 0x48 bytes 00:43:54.693 [2024-10-15 02:13:03.485649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:54.693 [2024-10-15 02:13:03.485789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:54.693 [2024-10-15 02:13:03.485803] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:54.693 [2024-10-15 02:13:03.485821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:54.693 [2024-10-15 02:13:03.485835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:54.693 [2024-10-15 02:13:03.485847] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:54.693 [2024-10-15 02:13:03.485858] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:54.693 [2024-10-15 02:13:03.485868] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:54.693 [2024-10-15 02:13:03.485878] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:54.693 [2024-10-15 02:13:03.485888] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:54.693 [2024-10-15 02:13:03.485899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.485908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:54.693 [2024-10-15 02:13:03.485919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:43:54.693 [2024-10-15 02:13:03.485929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.486013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.693 [2024-10-15 02:13:03.486026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:54.693 [2024-10-15 02:13:03.486037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:43:54.693 [2024-10-15 02:13:03.486047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.693 [2024-10-15 02:13:03.486154] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:54.693 [2024-10-15 02:13:03.486178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:54.693 [2024-10-15 02:13:03.486191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:54.694 [2024-10-15 02:13:03.486224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:54.694 [2024-10-15 02:13:03.486255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:54.694 [2024-10-15 02:13:03.486274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 
00:43:54.694 [2024-10-15 02:13:03.486285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:43:54.694 [2024-10-15 02:13:03.486306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:54.694 [2024-10-15 02:13:03.486316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:54.694 [2024-10-15 02:13:03.486326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:54.694 [2024-10-15 02:13:03.486336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:54.694 [2024-10-15 02:13:03.486356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:54.694 [2024-10-15 02:13:03.486385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:54.694 [2024-10-15 02:13:03.486431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:54.694 [2024-10-15 02:13:03.486461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:54.694 [2024-10-15 02:13:03.486491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:54.694 [2024-10-15 02:13:03.486521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:54.694 [2024-10-15 02:13:03.486569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:54.694 [2024-10-15 02:13:03.486579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:54.694 [2024-10-15 02:13:03.486590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:54.694 [2024-10-15 02:13:03.486603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:54.694 [2024-10-15 02:13:03.486614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:43:54.694 [2024-10-15 02:13:03.486624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:54.694 [2024-10-15 02:13:03.486645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:54.694 [2024-10-15 02:13:03.486654] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486665] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:54.694 [2024-10-15 02:13:03.486696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:54.694 [2024-10-15 02:13:03.486706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.694 [2024-10-15 02:13:03.486728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:54.694 [2024-10-15 02:13:03.486738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:54.694 [2024-10-15 02:13:03.486748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:54.694 [2024-10-15 02:13:03.486758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:54.694 [2024-10-15 02:13:03.486768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:54.694 [2024-10-15 02:13:03.486778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:54.694 [2024-10-15 02:13:03.486790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:54.694 [2024-10-15 02:13:03.486803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.486815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:54.694 [2024-10-15 02:13:03.486826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:54.694 [2024-10-15 02:13:03.486836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:54.694 [2024-10-15 02:13:03.486846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:54.694 [2024-10-15 02:13:03.486857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:54.694 [2024-10-15 02:13:03.486867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:54.694 [2024-10-15 02:13:03.486877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:54.694 [2024-10-15 02:13:03.486888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:54.694 [2024-10-15 02:13:03.486898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:54.694 [2024-10-15 02:13:03.486908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.486919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.486929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 
blk_offs:0x71e0 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.486970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.486981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:54.694 [2024-10-15 02:13:03.486992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:54.694 [2024-10-15 02:13:03.487003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.487014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:54.694 [2024-10-15 02:13:03.487024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:54.694 [2024-10-15 02:13:03.487034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:54.694 [2024-10-15 02:13:03.487044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:54.694 [2024-10-15 02:13:03.487055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.487065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:54.694 [2024-10-15 02:13:03.487076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:43:54.694 [2024-10-15 02:13:03.487086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.526314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.526379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:54.694 [2024-10-15 02:13:03.526397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.166 ms 00:43:54.694 [2024-10-15 02:13:03.526429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.526575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.526594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:54.694 [2024-10-15 02:13:03.526607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:43:54.694 [2024-10-15 02:13:03.526618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.563412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.563456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:54.694 [2024-10-15 02:13:03.563472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.700 ms 00:43:54.694 [2024-10-15 02:13:03.563482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.563531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.563546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:54.694 [2024-10-15 02:13:03.563558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.003 ms 00:43:54.694 [2024-10-15 02:13:03.563567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.564207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.564237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:54.694 [2024-10-15 02:13:03.564251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:43:54.694 [2024-10-15 02:13:03.564262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.564447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.564482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:54.694 [2024-10-15 02:13:03.564494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:43:54.694 [2024-10-15 02:13:03.564504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.579795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.694 [2024-10-15 02:13:03.579828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:54.694 [2024-10-15 02:13:03.579843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.265 ms 00:43:54.694 [2024-10-15 02:13:03.579853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.694 [2024-10-15 02:13:03.593533] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:43:54.695 [2024-10-15 02:13:03.593587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:54.695 [2024-10-15 02:13:03.593608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.695 [2024-10-15 02:13:03.593619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:54.695 [2024-10-15 02:13:03.593630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.633 ms 00:43:54.695 [2024-10-15 02:13:03.593639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.695 [2024-10-15 02:13:03.617235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.695 [2024-10-15 02:13:03.617270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:54.695 [2024-10-15 02:13:03.617284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.556 ms 00:43:54.695 [2024-10-15 02:13:03.617294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.695 [2024-10-15 02:13:03.629666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.695 [2024-10-15 02:13:03.629700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:54.695 [2024-10-15 02:13:03.629714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.323 ms 00:43:54.695 [2024-10-15 02:13:03.629723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.695 [2024-10-15 02:13:03.642494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.695 [2024-10-15 02:13:03.642551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:54.695 [2024-10-15 02:13:03.642584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.714 ms 00:43:54.695 [2024-10-15 02:13:03.642594] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.695 [2024-10-15 02:13:03.643406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.695 [2024-10-15 02:13:03.643491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:54.695 [2024-10-15 02:13:03.643506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:43:54.695 [2024-10-15 02:13:03.643517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.953 [2024-10-15 02:13:03.711500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.711563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:54.954 [2024-10-15 02:13:03.711582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.960 ms 00:43:54.954 [2024-10-15 02:13:03.711594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.721319] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:54.954 [2024-10-15 02:13:03.723351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.723382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:54.954 [2024-10-15 02:13:03.723396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.704 ms 00:43:54.954 [2024-10-15 02:13:03.723416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.723505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.723523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:54.954 [2024-10-15 02:13:03.723537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:54.954 [2024-10-15 02:13:03.723547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.725345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.725376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:54.954 [2024-10-15 02:13:03.725388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:43:54.954 [2024-10-15 02:13:03.725412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.725440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.725452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:54.954 [2024-10-15 02:13:03.725464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:54.954 [2024-10-15 02:13:03.725473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.725513] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:54.954 [2024-10-15 02:13:03.725527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.725541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:54.954 [2024-10-15 02:13:03.725550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:43:54.954 [2024-10-15 02:13:03.725563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.750343] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.750377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:54.954 [2024-10-15 02:13:03.750392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.759 ms 00:43:54.954 [2024-10-15 02:13:03.750413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.750494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.954 [2024-10-15 02:13:03.750511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:54.954 [2024-10-15 02:13:03.750526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:43:54.954 [2024-10-15 02:13:03.750546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.954 [2024-10-15 02:13:03.752030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.494 ms, result 0 00:43:56.329  [2024-10-15T02:13:06.276Z] Copying: 20/1024 [MB] (20 MBps) [2024-10-15T02:13:07.212Z] Copying: 43/1024 [MB] (22 MBps) [2024-10-15T02:13:08.150Z] Copying: 66/1024 [MB] (22 MBps) [2024-10-15T02:13:09.085Z] Copying: 89/1024 [MB] (23 MBps) [2024-10-15T02:13:10.085Z] Copying: 112/1024 [MB] (22 MBps) [2024-10-15T02:13:11.020Z] Copying: 135/1024 [MB] (22 MBps) [2024-10-15T02:13:11.956Z] Copying: 158/1024 [MB] (23 MBps) [2024-10-15T02:13:13.364Z] Copying: 181/1024 [MB] (23 MBps) [2024-10-15T02:13:13.948Z] Copying: 205/1024 [MB] (23 MBps) [2024-10-15T02:13:15.327Z] Copying: 228/1024 [MB] (23 MBps) [2024-10-15T02:13:16.262Z] Copying: 251/1024 [MB] (23 MBps) [2024-10-15T02:13:17.199Z] Copying: 275/1024 [MB] (23 MBps) [2024-10-15T02:13:18.134Z] Copying: 298/1024 [MB] (23 MBps) [2024-10-15T02:13:19.070Z] Copying: 322/1024 [MB] (23 MBps) [2024-10-15T02:13:20.007Z] Copying: 346/1024 [MB] (24 MBps) [2024-10-15T02:13:20.941Z] Copying: 369/1024 [MB] (23 MBps) [2024-10-15T02:13:22.318Z] Copying: 392/1024 [MB] (23 MBps) [2024-10-15T02:13:23.255Z] Copying: 416/1024 [MB] (23 MBps) [2024-10-15T02:13:24.192Z] Copying: 440/1024 [MB] (24 MBps) [2024-10-15T02:13:25.129Z] Copying: 464/1024 [MB] (24 MBps) [2024-10-15T02:13:26.065Z] Copying: 488/1024 [MB] (24 MBps) [2024-10-15T02:13:27.003Z] Copying: 512/1024 [MB] (23 MBps) [2024-10-15T02:13:27.940Z] Copying: 536/1024 [MB] (23 MBps) [2024-10-15T02:13:29.321Z] Copying: 558/1024 [MB] (22 MBps) [2024-10-15T02:13:30.257Z] Copying: 582/1024 [MB] (24 MBps) [2024-10-15T02:13:31.194Z] Copying: 606/1024 [MB] (23 MBps) [2024-10-15T02:13:32.130Z] Copying: 630/1024 [MB] (23 MBps) [2024-10-15T02:13:33.069Z] Copying: 653/1024 [MB] (23 MBps) [2024-10-15T02:13:34.035Z] Copying: 677/1024 [MB] (23 MBps) [2024-10-15T02:13:34.973Z] Copying: 701/1024 [MB] (23 MBps) [2024-10-15T02:13:36.351Z] Copying: 724/1024 [MB] (23 MBps) [2024-10-15T02:13:37.288Z] Copying: 747/1024 [MB] (23 MBps) [2024-10-15T02:13:38.225Z] Copying: 771/1024 [MB] (23 MBps) [2024-10-15T02:13:39.163Z] Copying: 794/1024 [MB] (23 MBps) [2024-10-15T02:13:40.100Z] Copying: 818/1024 [MB] (23 MBps) [2024-10-15T02:13:41.037Z] Copying: 841/1024 [MB] (23 MBps) [2024-10-15T02:13:41.973Z] Copying: 865/1024 [MB] (23 MBps) [2024-10-15T02:13:43.351Z] Copying: 888/1024 [MB] (23 MBps) [2024-10-15T02:13:44.287Z] Copying: 912/1024 [MB] (23 MBps) [2024-10-15T02:13:45.224Z] Copying: 935/1024 [MB] (23 MBps) [2024-10-15T02:13:46.159Z] Copying: 958/1024 [MB] (23 MBps) [2024-10-15T02:13:47.095Z] Copying: 981/1024 [MB] (22 MBps) 
[2024-10-15T02:13:48.031Z] Copying: 1005/1024 [MB] (23 MBps) [2024-10-15T02:13:48.289Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-10-15 02:13:48.120690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.120787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:39.278 [2024-10-15 02:13:48.120810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:39.278 [2024-10-15 02:13:48.120825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.278 [2024-10-15 02:13:48.120859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:39.278 [2024-10-15 02:13:48.125120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.125170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:39.278 [2024-10-15 02:13:48.125191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:44:39.278 [2024-10-15 02:13:48.125203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.278 [2024-10-15 02:13:48.125490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.125521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:39.278 [2024-10-15 02:13:48.125535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:44:39.278 [2024-10-15 02:13:48.125548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.278 [2024-10-15 02:13:48.130322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.130363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:39.278 [2024-10-15 02:13:48.130379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:44:39.278 [2024-10-15 02:13:48.130423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.278 [2024-10-15 02:13:48.137230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.137282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:39.278 [2024-10-15 02:13:48.137297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.724 ms 00:44:39.278 [2024-10-15 02:13:48.137308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.278 [2024-10-15 02:13:48.164147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.164182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:39.278 [2024-10-15 02:13:48.164197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.759 ms 00:44:39.278 [2024-10-15 02:13:48.164208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.278 [2024-10-15 02:13:48.178562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.278 [2024-10-15 02:13:48.178615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:39.278 [2024-10-15 02:13:48.178631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.314 ms 00:44:39.278 [2024-10-15 02:13:48.178642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.538 [2024-10-15 02:13:48.291334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.538 [2024-10-15 
02:13:48.291398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:39.538 [2024-10-15 02:13:48.291457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.642 ms 00:44:39.538 [2024-10-15 02:13:48.291472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.538 [2024-10-15 02:13:48.316664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.538 [2024-10-15 02:13:48.316713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:39.538 [2024-10-15 02:13:48.316728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.171 ms 00:44:39.538 [2024-10-15 02:13:48.316738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.538 [2024-10-15 02:13:48.341307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.538 [2024-10-15 02:13:48.341341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:39.538 [2024-10-15 02:13:48.341356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.528 ms 00:44:39.538 [2024-10-15 02:13:48.341365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.538 [2024-10-15 02:13:48.365421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.538 [2024-10-15 02:13:48.365456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:39.538 [2024-10-15 02:13:48.365471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.017 ms 00:44:39.538 [2024-10-15 02:13:48.365480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.538 [2024-10-15 02:13:48.389438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.538 [2024-10-15 02:13:48.389471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:39.538 [2024-10-15 02:13:48.389486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.894 ms 00:44:39.538 [2024-10-15 02:13:48.389495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.538 [2024-10-15 02:13:48.389533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:39.538 [2024-10-15 02:13:48.389553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:44:39.538 [2024-10-15 02:13:48.389573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 
[2024-10-15 02:13:48.389654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:39.538 [2024-10-15 02:13:48.389781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 
state: free 00:44:39.539 [2024-10-15 02:13:48.389904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.389993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 
0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:39.539 [2024-10-15 02:13:48.390659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:39.539 [2024-10-15 02:13:48.390670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ef74bb9-2e50-4b4e-aca6-8d1079fe565a 00:44:39.539 [2024-10-15 02:13:48.390681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:44:39.539 [2024-10-15 02:13:48.390691] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 10688 00:44:39.540 [2024-10-15 02:13:48.390701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 9728 00:44:39.540 [2024-10-15 02:13:48.390714] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0987 00:44:39.540 [2024-10-15 02:13:48.390727] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:39.540 [2024-10-15 02:13:48.390738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:39.540 [2024-10-15 02:13:48.390748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:39.540 [2024-10-15 02:13:48.390757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:39.540 [2024-10-15 02:13:48.390767] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:39.540 [2024-10-15 02:13:48.390777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.540 [2024-10-15 02:13:48.390801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:39.540 [2024-10-15 02:13:48.390813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:44:39.540 [2024-10-15 02:13:48.390827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.404915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.540 [2024-10-15 02:13:48.404945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:39.540 [2024-10-15 02:13:48.404960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.065 ms 00:44:39.540 [2024-10-15 02:13:48.404981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.405400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:39.540 [2024-10-15 02:13:48.405476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:39.540 [2024-10-15 02:13:48.405492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:44:39.540 [2024-10-15 02:13:48.405508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.436565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.540 [2024-10-15 02:13:48.436616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:39.540 [2024-10-15 02:13:48.436631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.540 [2024-10-15 02:13:48.436642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.436694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.540 [2024-10-15 02:13:48.436714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:39.540 [2024-10-15 02:13:48.436725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.540 [2024-10-15 02:13:48.436734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.436835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.540 [2024-10-15 02:13:48.436853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:39.540 [2024-10-15 02:13:48.436864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.540 [2024-10-15 02:13:48.436883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.436904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.540 [2024-10-15 02:13:48.436917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:39.540 [2024-10-15 02:13:48.436933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.540 [2024-10-15 02:13:48.436942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.540 [2024-10-15 02:13:48.522996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.540 [2024-10-15 02:13:48.523069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:39.540 [2024-10-15 02:13:48.523087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:44:39.540 [2024-10-15 02:13:48.523098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.799 [2024-10-15 02:13:48.594157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.799 [2024-10-15 02:13:48.594220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:39.799 [2024-10-15 02:13:48.594243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.799 [2024-10-15 02:13:48.594253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.799 [2024-10-15 02:13:48.594323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.799 [2024-10-15 02:13:48.594339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:39.799 [2024-10-15 02:13:48.594350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.799 [2024-10-15 02:13:48.594361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.799 [2024-10-15 02:13:48.594447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.799 [2024-10-15 02:13:48.594471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:39.799 [2024-10-15 02:13:48.594486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.799 [2024-10-15 02:13:48.594503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.799 [2024-10-15 02:13:48.594664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.799 [2024-10-15 02:13:48.594684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:39.799 [2024-10-15 02:13:48.594696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.799 [2024-10-15 02:13:48.594706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.799 [2024-10-15 02:13:48.594752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.799 [2024-10-15 02:13:48.594769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:39.799 [2024-10-15 02:13:48.594781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.799 [2024-10-15 02:13:48.594792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.800 [2024-10-15 02:13:48.594844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.800 [2024-10-15 02:13:48.594859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:39.800 [2024-10-15 02:13:48.594885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.800 [2024-10-15 02:13:48.594899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.800 [2024-10-15 02:13:48.594963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:39.800 [2024-10-15 02:13:48.594978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:39.800 [2024-10-15 02:13:48.594989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:39.800 [2024-10-15 02:13:48.595006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:39.800 [2024-10-15 02:13:48.595188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.423 ms, result 0 00:44:39.800 [2024-10-15 02:13:48.596183] 
bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:44:39.800 [2024-10-15 02:13:48.599302] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:44:40.737 00:44:40.737 00:44:40.737 02:13:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:44:42.642 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76839 00:44:42.642 02:13:51 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76839 ']' 00:44:42.642 02:13:51 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76839 00:44:42.642 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76839) - No such process 00:44:42.642 Process with pid 76839 is not found 00:44:42.642 02:13:51 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 76839 is not found' 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:44:42.642 Remove shared memory files 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:44:42.642 02:13:51 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:44:42.642 ************************************ 00:44:42.642 END TEST ftl_restore 00:44:42.642 ************************************ 00:44:42.642 00:44:42.642 real 3m32.993s 00:44:42.642 user 3m18.588s 00:44:42.642 sys 0m15.709s 00:44:42.642 02:13:51 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:42.642 02:13:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:44:42.642 02:13:51 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:44:42.642 02:13:51 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:44:42.642 02:13:51 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:42.642 02:13:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:44:42.642 ************************************ 00:44:42.642 START TEST ftl_dirty_shutdown 00:44:42.642 ************************************ 00:44:42.642 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:44:42.901 * Looking for test storage... 
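The ftl_restore pass above ends with a checksum round-trip: the test records an md5 of its data file before the FTL device is torn down, then re-verifies it after restore with 'md5sum -c' (restore.sh@82 in the trace), which is what prints "testfile: OK". A minimal sketch of that pattern, with a placeholder path standing in for the real testfile and the shutdown/restore step elided:

    # Sketch of the verify-after-restore pattern traced above; the path is a
    # placeholder and the real logic lives in test/ftl/restore.sh.
    testfile=/tmp/ftl_testfile
    md5sum "$testfile" > "$testfile.md5"   # record the checksum before shutdown
    # ... FTL shutdown and restore of the same device happen in between ...
    md5sum -c "$testfile.md5"              # prints "<path>: OK" only if the data survived intact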
00:44:42.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:42.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.901 --rc genhtml_branch_coverage=1 00:44:42.901 --rc genhtml_function_coverage=1 00:44:42.901 --rc genhtml_legend=1 00:44:42.901 --rc geninfo_all_blocks=1 00:44:42.901 --rc geninfo_unexecuted_blocks=1 00:44:42.901 00:44:42.901 ' 00:44:42.901 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:42.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.901 --rc genhtml_branch_coverage=1 00:44:42.901 --rc genhtml_function_coverage=1 00:44:42.901 --rc genhtml_legend=1 00:44:42.902 --rc geninfo_all_blocks=1 00:44:42.902 --rc geninfo_unexecuted_blocks=1 00:44:42.902 00:44:42.902 ' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:42.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.902 --rc genhtml_branch_coverage=1 00:44:42.902 --rc genhtml_function_coverage=1 00:44:42.902 --rc genhtml_legend=1 00:44:42.902 --rc geninfo_all_blocks=1 00:44:42.902 --rc geninfo_unexecuted_blocks=1 00:44:42.902 00:44:42.902 ' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:42.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.902 --rc genhtml_branch_coverage=1 00:44:42.902 --rc genhtml_function_coverage=1 00:44:42.902 --rc genhtml_legend=1 00:44:42.902 --rc geninfo_all_blocks=1 00:44:42.902 --rc geninfo_unexecuted_blocks=1 00:44:42.902 00:44:42.902 ' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:44:42.902 02:13:51 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79074 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79074 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79074 ']' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:42.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:42.902 02:13:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:44:43.161 [2024-10-15 02:13:51.967489] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
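At this point dirty_shutdown.sh has parsed its options (-c 0000:00:10.0 as the NV cache, 0000:00:11.0 as the base device) and launched its own SPDK target pinned to core 0, and waitforlisten blocks until pid 79074 answers RPCs on /var/tmp/spdk.sock. A rough sketch of that launch-and-wait idiom, using rpc_get_methods as a liveness probe; the real waitforlisten in autotest_common.sh also handles timeouts and checks that the pid is still alive:

    # Hedged sketch of the spdk_tgt launch/wait step traced above, not the
    # actual autotest_common.sh implementation.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # cpumask 0x1 pins the reactor to core 0
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1    # poll until the RPC socket answers
    done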
00:44:43.161 [2024-10-15 02:13:51.967690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79074 ] 00:44:43.161 [2024-10-15 02:13:52.137645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.420 [2024-10-15 02:13:52.336579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:44:44.380 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:44.639 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:44:44.639 { 00:44:44.639 "name": "nvme0n1", 00:44:44.639 "aliases": [ 00:44:44.639 "fc02ca02-a3e8-4aaa-9a67-66317346d441" 00:44:44.639 ], 00:44:44.639 "product_name": "NVMe disk", 00:44:44.639 "block_size": 4096, 00:44:44.639 "num_blocks": 1310720, 00:44:44.639 "uuid": "fc02ca02-a3e8-4aaa-9a67-66317346d441", 00:44:44.639 "numa_id": -1, 00:44:44.639 "assigned_rate_limits": { 00:44:44.639 "rw_ios_per_sec": 0, 00:44:44.639 "rw_mbytes_per_sec": 0, 00:44:44.639 "r_mbytes_per_sec": 0, 00:44:44.639 "w_mbytes_per_sec": 0 00:44:44.639 }, 00:44:44.639 "claimed": true, 00:44:44.639 "claim_type": "read_many_write_one", 00:44:44.639 "zoned": false, 00:44:44.639 "supported_io_types": { 00:44:44.639 "read": true, 00:44:44.639 "write": true, 00:44:44.639 "unmap": true, 00:44:44.639 "flush": true, 00:44:44.639 "reset": true, 00:44:44.639 "nvme_admin": true, 00:44:44.639 "nvme_io": true, 00:44:44.639 "nvme_io_md": false, 00:44:44.639 "write_zeroes": true, 00:44:44.639 "zcopy": false, 00:44:44.639 "get_zone_info": false, 00:44:44.639 "zone_management": false, 00:44:44.639 "zone_append": false, 00:44:44.639 "compare": true, 00:44:44.639 "compare_and_write": false, 00:44:44.639 "abort": true, 00:44:44.639 "seek_hole": false, 00:44:44.639 "seek_data": false, 00:44:44.639 
"copy": true, 00:44:44.639 "nvme_iov_md": false 00:44:44.639 }, 00:44:44.639 "driver_specific": { 00:44:44.639 "nvme": [ 00:44:44.639 { 00:44:44.639 "pci_address": "0000:00:11.0", 00:44:44.639 "trid": { 00:44:44.639 "trtype": "PCIe", 00:44:44.639 "traddr": "0000:00:11.0" 00:44:44.639 }, 00:44:44.639 "ctrlr_data": { 00:44:44.639 "cntlid": 0, 00:44:44.639 "vendor_id": "0x1b36", 00:44:44.639 "model_number": "QEMU NVMe Ctrl", 00:44:44.639 "serial_number": "12341", 00:44:44.639 "firmware_revision": "8.0.0", 00:44:44.639 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:44.639 "oacs": { 00:44:44.639 "security": 0, 00:44:44.639 "format": 1, 00:44:44.640 "firmware": 0, 00:44:44.640 "ns_manage": 1 00:44:44.640 }, 00:44:44.640 "multi_ctrlr": false, 00:44:44.640 "ana_reporting": false 00:44:44.640 }, 00:44:44.640 "vs": { 00:44:44.640 "nvme_version": "1.4" 00:44:44.640 }, 00:44:44.640 "ns_data": { 00:44:44.640 "id": 1, 00:44:44.640 "can_share": false 00:44:44.640 } 00:44:44.640 } 00:44:44.640 ], 00:44:44.640 "mp_policy": "active_passive" 00:44:44.640 } 00:44:44.640 } 00:44:44.640 ]' 00:44:44.640 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:44.898 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:45.157 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3499ea23-231c-4f33-9c41-cbb45eda0bea 00:44:45.157 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:44:45.157 02:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3499ea23-231c-4f33-9c41-cbb45eda0bea 00:44:45.157 [2024-10-15 02:13:54.132807] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair. 
00:44:45.157 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:45.416 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=39e63fba-3b4e-4a96-96fe-1cad4fb6db04 00:44:45.416 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 39e63fba-3b4e-4a96-96fe-1cad4fb6db04 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:44:45.675 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:45.934 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:44:45.934 { 00:44:45.934 "name": "b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b", 00:44:45.934 "aliases": [ 00:44:45.934 "lvs/nvme0n1p0" 00:44:45.934 ], 00:44:45.934 "product_name": "Logical Volume", 00:44:45.934 "block_size": 4096, 00:44:45.934 "num_blocks": 26476544, 00:44:45.934 "uuid": "b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b", 00:44:45.934 "assigned_rate_limits": { 00:44:45.934 "rw_ios_per_sec": 0, 00:44:45.934 "rw_mbytes_per_sec": 0, 00:44:45.934 "r_mbytes_per_sec": 0, 00:44:45.934 "w_mbytes_per_sec": 0 00:44:45.934 }, 00:44:45.934 "claimed": false, 00:44:45.934 "zoned": false, 00:44:45.934 "supported_io_types": { 00:44:45.934 "read": true, 00:44:45.934 "write": true, 00:44:45.934 "unmap": true, 00:44:45.934 "flush": false, 00:44:45.934 "reset": true, 00:44:45.934 "nvme_admin": false, 00:44:45.934 "nvme_io": false, 00:44:45.934 "nvme_io_md": false, 00:44:45.934 "write_zeroes": true, 00:44:45.934 "zcopy": false, 00:44:45.934 "get_zone_info": false, 00:44:45.934 "zone_management": false, 00:44:45.934 "zone_append": false, 00:44:45.934 "compare": false, 00:44:45.934 "compare_and_write": false, 00:44:45.934 "abort": false, 00:44:45.934 "seek_hole": true, 00:44:45.934 "seek_data": true, 00:44:45.934 "copy": false, 00:44:45.934 "nvme_iov_md": false 00:44:45.934 }, 00:44:45.934 "driver_specific": { 00:44:45.934 "lvol": { 00:44:45.934 "lvol_store_uuid": "39e63fba-3b4e-4a96-96fe-1cad4fb6db04", 00:44:45.934 "base_bdev": "nvme0n1", 00:44:45.934 "thin_provision": true, 00:44:45.934 
"num_allocated_clusters": 0, 00:44:45.934 "snapshot": false, 00:44:45.934 "clone": false, 00:44:45.934 "esnap_clone": false 00:44:45.934 } 00:44:45.934 } 00:44:45.934 } 00:44:45.934 ]' 00:44:45.934 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:44:46.193 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:44:46.193 02:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:44:46.193 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:44:46.193 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:44:46.193 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:44:46.193 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:44:46.193 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:44:46.193 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:46.451 [2024-10-15 02:13:55.361180] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:44:46.451 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:46.760 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:44:46.760 { 00:44:46.760 "name": "b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b", 00:44:46.760 "aliases": [ 00:44:46.760 "lvs/nvme0n1p0" 00:44:46.760 ], 00:44:46.760 "product_name": "Logical Volume", 00:44:46.760 "block_size": 4096, 00:44:46.760 "num_blocks": 26476544, 00:44:46.760 "uuid": "b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b", 00:44:46.760 "assigned_rate_limits": { 00:44:46.760 "rw_ios_per_sec": 0, 00:44:46.760 "rw_mbytes_per_sec": 0, 00:44:46.760 "r_mbytes_per_sec": 0, 00:44:46.760 "w_mbytes_per_sec": 0 00:44:46.760 }, 00:44:46.760 "claimed": false, 00:44:46.760 "zoned": false, 00:44:46.760 "supported_io_types": { 00:44:46.760 "read": true, 00:44:46.760 "write": true, 00:44:46.760 "unmap": true, 00:44:46.760 "flush": false, 00:44:46.760 "reset": true, 00:44:46.760 "nvme_admin": false, 00:44:46.760 "nvme_io": false, 00:44:46.760 "nvme_io_md": false, 00:44:46.760 "write_zeroes": true, 00:44:46.760 "zcopy": false, 00:44:46.760 "get_zone_info": false, 00:44:46.760 "zone_management": false, 00:44:46.760 "zone_append": false, 00:44:46.760 "compare": false, 00:44:46.760 "compare_and_write": false, 00:44:46.760 "abort": false, 00:44:46.760 "seek_hole": true, 00:44:46.760 "seek_data": true, 00:44:46.760 "copy": false, 00:44:46.760 
"nvme_iov_md": false 00:44:46.760 }, 00:44:46.760 "driver_specific": { 00:44:46.760 "lvol": { 00:44:46.760 "lvol_store_uuid": "39e63fba-3b4e-4a96-96fe-1cad4fb6db04", 00:44:46.760 "base_bdev": "nvme0n1", 00:44:46.760 "thin_provision": true, 00:44:46.760 "num_allocated_clusters": 0, 00:44:46.760 "snapshot": false, 00:44:46.760 "clone": false, 00:44:46.760 "esnap_clone": false 00:44:46.760 } 00:44:46.760 } 00:44:46.760 } 00:44:46.760 ]' 00:44:46.760 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:44:46.760 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:44:46.760 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:44:47.020 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:44:47.020 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:44:47.020 02:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:44:47.020 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:44:47.020 02:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:47.020 [2024-10-15 02:13:56.012568] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:44:47.020 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:44:47.020 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:47.020 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:47.020 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:44:47.020 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:44:47.020 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:44:47.278 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:44:47.536 { 00:44:47.536 "name": "b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b", 00:44:47.536 "aliases": [ 00:44:47.536 "lvs/nvme0n1p0" 00:44:47.536 ], 00:44:47.536 "product_name": "Logical Volume", 00:44:47.536 "block_size": 4096, 00:44:47.536 "num_blocks": 26476544, 00:44:47.536 "uuid": "b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b", 00:44:47.536 "assigned_rate_limits": { 00:44:47.536 "rw_ios_per_sec": 0, 00:44:47.536 "rw_mbytes_per_sec": 0, 00:44:47.536 "r_mbytes_per_sec": 0, 00:44:47.536 "w_mbytes_per_sec": 0 00:44:47.536 }, 00:44:47.536 "claimed": false, 00:44:47.536 "zoned": false, 00:44:47.536 "supported_io_types": { 00:44:47.536 "read": true, 00:44:47.536 "write": true, 00:44:47.536 "unmap": true, 00:44:47.536 "flush": false, 00:44:47.536 "reset": true, 00:44:47.536 "nvme_admin": false, 00:44:47.536 "nvme_io": false, 00:44:47.536 "nvme_io_md": false, 00:44:47.536 "write_zeroes": true, 00:44:47.536 "zcopy": false, 00:44:47.536 "get_zone_info": false, 00:44:47.536 "zone_management": false, 00:44:47.536 "zone_append": false, 00:44:47.536 "compare": false, 00:44:47.536 "compare_and_write": false, 00:44:47.536 "abort": false, 00:44:47.536 "seek_hole": true, 
00:44:47.536 "seek_data": true, 00:44:47.536 "copy": false, 00:44:47.536 "nvme_iov_md": false 00:44:47.536 }, 00:44:47.536 "driver_specific": { 00:44:47.536 "lvol": { 00:44:47.536 "lvol_store_uuid": "39e63fba-3b4e-4a96-96fe-1cad4fb6db04", 00:44:47.536 "base_bdev": "nvme0n1", 00:44:47.536 "thin_provision": true, 00:44:47.536 "num_allocated_clusters": 0, 00:44:47.536 "snapshot": false, 00:44:47.536 "clone": false, 00:44:47.536 "esnap_clone": false 00:44:47.536 } 00:44:47.536 } 00:44:47.536 } 00:44:47.536 ]' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b --l2p_dram_limit 10' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:44:47.536 02:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b --l2p_dram_limit 10 -c nvc0n1p0 00:44:47.796 [2024-10-15 02:13:56.582563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.796 [2024-10-15 02:13:56.582628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:47.796 [2024-10-15 02:13:56.582647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:47.796 [2024-10-15 02:13:56.582660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.796 [2024-10-15 02:13:56.582728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.582751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:47.797 [2024-10-15 02:13:56.582764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:44:47.797 [2024-10-15 02:13:56.582778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.582846] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:47.797 [2024-10-15 02:13:56.583768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:47.797 [2024-10-15 02:13:56.583816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.583833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:47.797 [2024-10-15 02:13:56.583848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:44:47.797 [2024-10-15 02:13:56.583861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:44:47.797 [2024-10-15 02:13:56.583997] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 429ddebc-ad3c-4865-b14d-60695ae1c9ae 00:44:47.797 [2024-10-15 02:13:56.585878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.585931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:47.797 [2024-10-15 02:13:56.585965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:44:47.797 [2024-10-15 02:13:56.585976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.595338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.595377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:47.797 [2024-10-15 02:13:56.595411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.302 ms 00:44:47.797 [2024-10-15 02:13:56.595436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.595556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.595576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:47.797 [2024-10-15 02:13:56.595592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:44:47.797 [2024-10-15 02:13:56.595605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.595732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.595751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:47.797 [2024-10-15 02:13:56.595767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:44:47.797 [2024-10-15 02:13:56.595777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.595813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:47.797 [2024-10-15 02:13:56.600253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.600306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:47.797 [2024-10-15 02:13:56.600338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.453 ms 00:44:47.797 [2024-10-15 02:13:56.600352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.600391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.600410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:47.797 [2024-10-15 02:13:56.600439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:47.797 [2024-10-15 02:13:56.600455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.600499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:47.797 [2024-10-15 02:13:56.600656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:47.797 [2024-10-15 02:13:56.600673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:47.797 [2024-10-15 02:13:56.600693] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:47.797 [2024-10-15 02:13:56.600707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:47.797 [2024-10-15 02:13:56.600724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:47.797 [2024-10-15 02:13:56.600736] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:47.797 [2024-10-15 02:13:56.600749] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:47.797 [2024-10-15 02:13:56.600770] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:47.797 [2024-10-15 02:13:56.600784] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:47.797 [2024-10-15 02:13:56.600796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.600809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:47.797 [2024-10-15 02:13:56.600820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:44:47.797 [2024-10-15 02:13:56.600832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.600919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.797 [2024-10-15 02:13:56.600940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:47.797 [2024-10-15 02:13:56.600952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:44:47.797 [2024-10-15 02:13:56.600964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.797 [2024-10-15 02:13:56.601058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:47.797 [2024-10-15 02:13:56.601088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:47.797 [2024-10-15 02:13:56.601101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:47.797 [2024-10-15 02:13:56.601136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:47.797 [2024-10-15 02:13:56.601166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:47.797 [2024-10-15 02:13:56.601187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:47.797 [2024-10-15 02:13:56.601198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:47.797 [2024-10-15 02:13:56.601207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:47.797 [2024-10-15 02:13:56.601220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:47.797 [2024-10-15 02:13:56.601230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:47.797 [2024-10-15 02:13:56.601241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:47.797 
[2024-10-15 02:13:56.601250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:47.797 [2024-10-15 02:13:56.601263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:47.797 [2024-10-15 02:13:56.601293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:47.797 [2024-10-15 02:13:56.601328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:47.797 [2024-10-15 02:13:56.601357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:47.797 [2024-10-15 02:13:56.601391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:47.797 [2024-10-15 02:13:56.601461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:47.797 [2024-10-15 02:13:56.601482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:47.797 [2024-10-15 02:13:56.601494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:47.797 [2024-10-15 02:13:56.601504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:47.797 [2024-10-15 02:13:56.601516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:47.797 [2024-10-15 02:13:56.601526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:47.797 [2024-10-15 02:13:56.601537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:47.797 [2024-10-15 02:13:56.601559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:47.797 [2024-10-15 02:13:56.601568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601579] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:47.797 [2024-10-15 02:13:56.601592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:47.797 [2024-10-15 02:13:56.601608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:47.797 [2024-10-15 02:13:56.601632] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:44:47.797 [2024-10-15 02:13:56.601642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:47.797 [2024-10-15 02:13:56.601655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:47.797 [2024-10-15 02:13:56.601665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:47.797 [2024-10-15 02:13:56.601691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:47.797 [2024-10-15 02:13:56.601702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:47.797 [2024-10-15 02:13:56.601718] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:47.797 [2024-10-15 02:13:56.601748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:47.798 [2024-10-15 02:13:56.601772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:47.798 [2024-10-15 02:13:56.601785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:47.798 [2024-10-15 02:13:56.601795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:47.798 [2024-10-15 02:13:56.601807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:47.798 [2024-10-15 02:13:56.601817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:47.798 [2024-10-15 02:13:56.601831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:47.798 [2024-10-15 02:13:56.601842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:47.798 [2024-10-15 02:13:56.601854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:47.798 [2024-10-15 02:13:56.601864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:47.798 [2024-10-15 02:13:56.601921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:47.798 [2024-10-15 
02:13:56.601933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:47.798 [2024-10-15 02:13:56.601959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:47.798 [2024-10-15 02:13:56.601972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:47.798 [2024-10-15 02:13:56.601982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:47.798 [2024-10-15 02:13:56.601996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:47.798 [2024-10-15 02:13:56.602007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:47.798 [2024-10-15 02:13:56.602023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:44:47.798 [2024-10-15 02:13:56.602034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:47.798 [2024-10-15 02:13:56.602094] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:44:47.798 [2024-10-15 02:13:56.602111] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:44:51.083 [2024-10-15 02:13:59.872376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:13:59.872481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:51.083 [2024-10-15 02:13:59.872513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3270.296 ms 00:44:51.083 [2024-10-15 02:13:59.872530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:13:59.918497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:13:59.918574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:51.083 [2024-10-15 02:13:59.918603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.572 ms 00:44:51.083 [2024-10-15 02:13:59.918623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:13:59.918868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:13:59.918909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:51.083 [2024-10-15 02:13:59.918931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:44:51.083 [2024-10-15 02:13:59.918952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:13:59.987733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:13:59.987825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:51.083 [2024-10-15 02:13:59.987871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.697 ms 00:44:51.083 [2024-10-15 02:13:59.987897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:13:59.988002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 
02:13:59.988038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:51.083 [2024-10-15 02:13:59.988070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:44:51.083 [2024-10-15 02:13:59.988093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:13:59.988998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:13:59.989059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:51.083 [2024-10-15 02:13:59.989100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:44:51.083 [2024-10-15 02:13:59.989123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:13:59.989462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:13:59.989515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:51.083 [2024-10-15 02:13:59.989547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:44:51.083 [2024-10-15 02:13:59.989570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:14:00.013928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:14:00.013981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:51.083 [2024-10-15 02:14:00.014012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.303 ms 00:44:51.083 [2024-10-15 02:14:00.014030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.083 [2024-10-15 02:14:00.031805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:51.083 [2024-10-15 02:14:00.036602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.083 [2024-10-15 02:14:00.036652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:51.083 [2024-10-15 02:14:00.036677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.451 ms 00:44:51.083 [2024-10-15 02:14:00.036695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.124310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.124429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:51.342 [2024-10-15 02:14:00.124462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.565 ms 00:44:51.342 [2024-10-15 02:14:00.124484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.124765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.124810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:51.342 [2024-10-15 02:14:00.124828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:44:51.342 [2024-10-15 02:14:00.124846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.163644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.163709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:51.342 [2024-10-15 02:14:00.163732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.710 ms 00:44:51.342 [2024-10-15 
02:14:00.163751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.188879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.188922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:51.342 [2024-10-15 02:14:00.188954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.068 ms 00:44:51.342 [2024-10-15 02:14:00.188966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.189827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.189862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:51.342 [2024-10-15 02:14:00.189892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:44:51.342 [2024-10-15 02:14:00.189907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.268152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.268227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:44:51.342 [2024-10-15 02:14:00.268247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.189 ms 00:44:51.342 [2024-10-15 02:14:00.268261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.294181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.294223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:44:51.342 [2024-10-15 02:14:00.294256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.833 ms 00:44:51.342 [2024-10-15 02:14:00.294268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.318556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.318596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:44:51.342 [2024-10-15 02:14:00.318626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.244 ms 00:44:51.342 [2024-10-15 02:14:00.318638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.343234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.343286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:51.342 [2024-10-15 02:14:00.343318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.556 ms 00:44:51.342 [2024-10-15 02:14:00.343332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.343380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.343401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:51.342 [2024-10-15 02:14:00.343430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:51.342 [2024-10-15 02:14:00.343443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.343567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:51.342 [2024-10-15 02:14:00.343588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:51.342 [2024-10-15 02:14:00.343600] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:51.342 [2024-10-15 02:14:00.343613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:51.342 [2024-10-15 02:14:00.344968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3761.895 ms, result 0 00:44:51.342 { 00:44:51.342 "name": "ftl0", 00:44:51.342 "uuid": "429ddebc-ad3c-4865-b14d-60695ae1c9ae" 00:44:51.342 } 00:44:51.601 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:44:51.601 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:44:51.860 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:44:51.860 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:44:51.860 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:44:52.118 /dev/nbd0 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:44:52.118 1+0 records in 00:44:52.118 1+0 records out 00:44:52.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035899 s, 11.4 MB/s 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:44:52.118 02:14:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:44:52.118 [2024-10-15 02:14:01.053903] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
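FTL startup completed in about 3.76 s with status 0, returning bdev ftl0 (UUID 429ddebc-ad3c-4865-b14d-60695ae1c9ae). Pulling together the commands traced between dirty_shutdown.sh@61 and @75 above, the device creation and the 1 GiB data generation boil down to the following condensed sketch (rpc as in the earlier sketch; names and paths are the ones from this run):

    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d b0c94d5d-d3bf-4a95-ab25-f0c974bc7c6b \
        --l2p_dram_limit 10 -c nvc0n1p0     # base: the thin lvol; cache: the 5171 MiB split of the 0000:00:10.0 controller
    "$rpc" save_subsystem_config -n bdev    # wrapped in {"subsystems": [...]} by the echoes above
    modprobe nbd
    "$rpc" nbd_start_disk ftl0 /dev/nbd0    # expose ftl0 as a kernel block device
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144

262144 blocks x 4096 B = 1 GiB, which matches the 1024/1024 [MB] progress that follows; the saved bdev configuration is presumably what lets the test re-create the same stack after the shutdown it is about to force.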
00:44:52.119 [2024-10-15 02:14:01.054083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79222 ] 00:44:52.377 [2024-10-15 02:14:01.223830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.636 [2024-10-15 02:14:01.466027] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:44:54.013  [2024-10-15T02:14:03.961Z] Copying: 211/1024 [MB] (211 MBps) [2024-10-15T02:14:04.898Z] Copying: 423/1024 [MB] (212 MBps) [2024-10-15T02:14:05.835Z] Copying: 635/1024 [MB] (211 MBps) [2024-10-15T02:14:06.790Z] Copying: 839/1024 [MB] (203 MBps) [2024-10-15T02:14:06.790Z] Copying: 1015/1024 [MB] (176 MBps) [2024-10-15T02:14:08.167Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:44:59.155 00:44:59.155 02:14:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:45:01.062 02:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:45:01.062 [2024-10-15 02:14:09.728693] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:45:01.062 [2024-10-15 02:14:09.728857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79319 ] 00:45:01.062 [2024-10-15 02:14:09.904089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:01.321 [2024-10-15 02:14:10.142197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:45:02.699  [2024-10-15T02:14:12.648Z] Copying: 13/1024 [MB] (13 MBps) [2024-10-15T02:14:13.610Z] Copying: 26/1024 [MB] (12 MBps) [2024-10-15T02:14:14.546Z] Copying: 41/1024 [MB] (15 MBps) [2024-10-15T02:14:15.483Z] Copying: 55/1024 [MB] (14 MBps) [2024-10-15T02:14:16.419Z] Copying: 70/1024 [MB] (14 MBps) [2024-10-15T02:14:17.795Z] Copying: 85/1024 [MB] (14 MBps) [2024-10-15T02:14:18.730Z] Copying: 100/1024 [MB] (15 MBps) [2024-10-15T02:14:19.666Z] Copying: 115/1024 [MB] (14 MBps) [2024-10-15T02:14:20.601Z] Copying: 130/1024 [MB] (14 MBps) [2024-10-15T02:14:21.537Z] Copying: 145/1024 [MB] (14 MBps) [2024-10-15T02:14:22.474Z] Copying: 159/1024 [MB] (14 MBps) [2024-10-15T02:14:23.848Z] Copying: 173/1024 [MB] (13 MBps) [2024-10-15T02:14:24.784Z] Copying: 188/1024 [MB] (14 MBps) [2024-10-15T02:14:25.719Z] Copying: 202/1024 [MB] (14 MBps) [2024-10-15T02:14:26.656Z] Copying: 217/1024 [MB] (14 MBps) [2024-10-15T02:14:27.593Z] Copying: 232/1024 [MB] (15 MBps) [2024-10-15T02:14:28.529Z] Copying: 248/1024 [MB] (15 MBps) [2024-10-15T02:14:29.465Z] Copying: 263/1024 [MB] (15 MBps) [2024-10-15T02:14:30.840Z] Copying: 278/1024 [MB] (15 MBps) [2024-10-15T02:14:31.775Z] Copying: 293/1024 [MB] (15 MBps) [2024-10-15T02:14:32.711Z] Copying: 308/1024 [MB] (15 MBps) [2024-10-15T02:14:33.668Z] Copying: 323/1024 [MB] (15 MBps) [2024-10-15T02:14:34.617Z] Copying: 339/1024 [MB] (15 MBps) [2024-10-15T02:14:35.553Z] Copying: 354/1024 [MB] (15 MBps) [2024-10-15T02:14:36.489Z] Copying: 368/1024 [MB] (14 MBps) [2024-10-15T02:14:37.425Z] Copying: 383/1024 [MB] (14 MBps) [2024-10-15T02:14:38.800Z] Copying: 398/1024 [MB] 
(14 MBps) [2024-10-15T02:14:39.735Z] Copying: 413/1024 [MB] (15 MBps) [2024-10-15T02:14:40.671Z] Copying: 428/1024 [MB] (14 MBps) [2024-10-15T02:14:41.607Z] Copying: 443/1024 [MB] (14 MBps) [2024-10-15T02:14:42.543Z] Copying: 458/1024 [MB] (15 MBps) [2024-10-15T02:14:43.479Z] Copying: 473/1024 [MB] (15 MBps) [2024-10-15T02:14:44.857Z] Copying: 488/1024 [MB] (14 MBps) [2024-10-15T02:14:45.424Z] Copying: 503/1024 [MB] (14 MBps) [2024-10-15T02:14:46.800Z] Copying: 518/1024 [MB] (14 MBps) [2024-10-15T02:14:47.734Z] Copying: 532/1024 [MB] (14 MBps) [2024-10-15T02:14:48.670Z] Copying: 547/1024 [MB] (14 MBps) [2024-10-15T02:14:49.620Z] Copying: 562/1024 [MB] (14 MBps) [2024-10-15T02:14:50.554Z] Copying: 577/1024 [MB] (15 MBps) [2024-10-15T02:14:51.486Z] Copying: 591/1024 [MB] (14 MBps) [2024-10-15T02:14:52.422Z] Copying: 606/1024 [MB] (14 MBps) [2024-10-15T02:14:53.840Z] Copying: 621/1024 [MB] (14 MBps) [2024-10-15T02:14:54.784Z] Copying: 636/1024 [MB] (15 MBps) [2024-10-15T02:14:55.719Z] Copying: 651/1024 [MB] (14 MBps) [2024-10-15T02:14:56.655Z] Copying: 666/1024 [MB] (14 MBps) [2024-10-15T02:14:57.592Z] Copying: 681/1024 [MB] (15 MBps) [2024-10-15T02:14:58.533Z] Copying: 696/1024 [MB] (15 MBps) [2024-10-15T02:14:59.470Z] Copying: 711/1024 [MB] (14 MBps) [2024-10-15T02:15:00.850Z] Copying: 726/1024 [MB] (15 MBps) [2024-10-15T02:15:01.418Z] Copying: 742/1024 [MB] (15 MBps) [2024-10-15T02:15:02.797Z] Copying: 756/1024 [MB] (14 MBps) [2024-10-15T02:15:03.737Z] Copying: 770/1024 [MB] (13 MBps) [2024-10-15T02:15:04.674Z] Copying: 784/1024 [MB] (14 MBps) [2024-10-15T02:15:05.610Z] Copying: 799/1024 [MB] (14 MBps) [2024-10-15T02:15:06.545Z] Copying: 814/1024 [MB] (14 MBps) [2024-10-15T02:15:07.481Z] Copying: 829/1024 [MB] (14 MBps) [2024-10-15T02:15:08.417Z] Copying: 843/1024 [MB] (13 MBps) [2024-10-15T02:15:09.792Z] Copying: 858/1024 [MB] (14 MBps) [2024-10-15T02:15:10.728Z] Copying: 873/1024 [MB] (15 MBps) [2024-10-15T02:15:11.663Z] Copying: 887/1024 [MB] (14 MBps) [2024-10-15T02:15:12.598Z] Copying: 902/1024 [MB] (14 MBps) [2024-10-15T02:15:13.535Z] Copying: 917/1024 [MB] (14 MBps) [2024-10-15T02:15:14.503Z] Copying: 931/1024 [MB] (14 MBps) [2024-10-15T02:15:15.457Z] Copying: 946/1024 [MB] (14 MBps) [2024-10-15T02:15:16.839Z] Copying: 960/1024 [MB] (14 MBps) [2024-10-15T02:15:17.776Z] Copying: 975/1024 [MB] (14 MBps) [2024-10-15T02:15:18.713Z] Copying: 990/1024 [MB] (14 MBps) [2024-10-15T02:15:19.650Z] Copying: 1005/1024 [MB] (15 MBps) [2024-10-15T02:15:19.908Z] Copying: 1020/1024 [MB] (14 MBps) [2024-10-15T02:15:20.843Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:46:11.831 00:46:11.831 02:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:46:11.831 02:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:46:12.090 02:15:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:46:12.659 [2024-10-15 02:15:21.386501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.386561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:12.659 [2024-10-15 02:15:21.386601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:12.659 [2024-10-15 02:15:21.386613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.386650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:46:12.659 [2024-10-15 02:15:21.389819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.390026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:12.659 [2024-10-15 02:15:21.390052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:46:12.659 [2024-10-15 02:15:21.390068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.392944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.392992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:12.659 [2024-10-15 02:15:21.393009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:46:12.659 [2024-10-15 02:15:21.393021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.409262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.409478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:12.659 [2024-10-15 02:15:21.409611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.219 ms 00:46:12.659 [2024-10-15 02:15:21.409665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.414899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.415067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:12.659 [2024-10-15 02:15:21.415094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.158 ms 00:46:12.659 [2024-10-15 02:15:21.415109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.440708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.440765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:12.659 [2024-10-15 02:15:21.440781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.501 ms 00:46:12.659 [2024-10-15 02:15:21.440793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.459103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.459288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:12.659 [2024-10-15 02:15:21.459315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.269 ms 00:46:12.659 [2024-10-15 02:15:21.459331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.459519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.459546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:12.659 [2024-10-15 02:15:21.459563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:46:12.659 [2024-10-15 02:15:21.459600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:12.659 [2024-10-15 02:15:21.485434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:12.659 [2024-10-15 02:15:21.485492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:12.659 [2024-10-15 02:15:21.485508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.808 ms 
00:46:12.659 [2024-10-15 02:15:21.485520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.659 [2024-10-15 02:15:21.510489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:12.659 [2024-10-15 02:15:21.510543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:46:12.659 [2024-10-15 02:15:21.510575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.927 ms
00:46:12.659 [2024-10-15 02:15:21.510587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.659 [2024-10-15 02:15:21.534630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:12.659 [2024-10-15 02:15:21.534691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:46:12.659 [2024-10-15 02:15:21.534707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms
00:46:12.659 [2024-10-15 02:15:21.534719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.659 [2024-10-15 02:15:21.558555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:12.659 [2024-10-15 02:15:21.558614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:46:12.659 [2024-10-15 02:15:21.558630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.735 ms
00:46:12.659 [2024-10-15 02:15:21.558642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.659 [2024-10-15 02:15:21.558684] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:46:12.659 [2024-10-15 02:15:21.558711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:46:12.659 [2024-10-15 02:15:21.558796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.558986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:46:12.660 [2024-10-15 02:15:21.559953] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:46:12.660 [2024-10-15 02:15:21.559963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 429ddebc-ad3c-4865-b14d-60695ae1c9ae
00:46:12.661 [2024-10-15 02:15:21.559976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:46:12.661 [2024-10-15 02:15:21.559986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:46:12.661 [2024-10-15 02:15:21.559998] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:46:12.661 [2024-10-15 02:15:21.560008] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:46:12.661 [2024-10-15 02:15:21.560019] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:46:12.661 [2024-10-15 02:15:21.560031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:46:12.661 [2024-10-15 02:15:21.560043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:46:12.661 [2024-10-15 02:15:21.560053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:46:12.661 [2024-10-15 02:15:21.560064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:46:12.661 [2024-10-15 02:15:21.560074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:12.661 [2024-10-15 02:15:21.560086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:46:12.661 [2024-10-15 02:15:21.560098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms
00:46:12.661 [2024-10-15 02:15:21.560113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.661 [2024-10-15 02:15:21.574381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:12.661 [2024-10-15 02:15:21.574643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:46:12.661 [2024-10-15 02:15:21.574671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms
00:46:12.661 [2024-10-15 02:15:21.574686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.661 [2024-10-15 02:15:21.575188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:12.661 [2024-10-15 02:15:21.575216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:46:12.661 [2024-10-15 02:15:21.575232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms
00:46:12.661 [2024-10-15 02:15:21.575244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.661 [2024-10-15 02:15:21.616183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.661 [2024-10-15 02:15:21.616228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:46:12.661 [2024-10-15 02:15:21.616243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.661 [2024-10-15 02:15:21.616255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.661 [2024-10-15 02:15:21.616312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.661 [2024-10-15 02:15:21.616331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:46:12.661 [2024-10-15 02:15:21.616345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.661 [2024-10-15 02:15:21.616356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.661 [2024-10-15 02:15:21.616461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.661 [2024-10-15 02:15:21.616495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:46:12.661 [2024-10-15 02:15:21.616507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.661 [2024-10-15 02:15:21.616519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.661 [2024-10-15 02:15:21.616543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.661 [2024-10-15 02:15:21.616560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:46:12.661 [2024-10-15 02:15:21.616570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.661 [2024-10-15 02:15:21.616584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.701428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.701496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:46:12.920 [2024-10-15 02:15:21.701513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.701525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.770504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.770587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:46:12.920 [2024-10-15 02:15:21.770610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.770623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.770781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.770823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:46:12.920 [2024-10-15 02:15:21.770835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.770863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.770930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.770953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:46:12.920 [2024-10-15 02:15:21.770965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.770993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.771160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.771185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:46:12.920 [2024-10-15 02:15:21.771197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.771210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.771264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.771286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:46:12.920 [2024-10-15 02:15:21.771299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.771312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.771365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.771395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:46:12.920 [2024-10-15 02:15:21.771408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.771432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.771489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:46:12.920 [2024-10-15 02:15:21.771511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:46:12.920 [2024-10-15 02:15:21.771547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:46:12.920 [2024-10-15 02:15:21.771561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:12.920 [2024-10-15 02:15:21.771745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.192 ms, result 0
00:46:12.920 [2024-10-15 02:15:21.773132] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair.
00:46:12.920 true
00:46:12.920 02:15:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79074
00:46:12.920 02:15:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79074
00:46:12.920 02:15:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:46:12.920 [2024-10-15 02:15:21.877399] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
00:46:12.920 [2024-10-15 02:15:21.877555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80033 ]
00:46:13.179 [2024-10-15 02:15:22.036916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:13.438 [2024-10-15 02:15:22.224667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:46:14.814  [2024-10-15T02:15:24.762Z] Copying: 208/1024 [MB] (208 MBps) [2024-10-15T02:15:25.698Z] Copying: 414/1024 [MB] (205 MBps) [2024-10-15T02:15:26.635Z] Copying: 624/1024 [MB] (210 MBps) [2024-10-15T02:15:27.569Z] Copying: 821/1024 [MB] (196 MBps) [2024-10-15T02:15:27.569Z] Copying: 1014/1024 [MB] (192 MBps) [2024-10-15T02:15:28.943Z] Copying: 1024/1024 [MB] (average 202 MBps)
00:46:19.931
00:46:19.931 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79074 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:46:19.931 02:15:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:46:19.931 [2024-10-15 02:15:28.646695] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization...
00:46:19.931 [2024-10-15 02:15:28.646923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80106 ]
00:46:20.190 [2024-10-15 02:15:28.811948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:20.449 [2024-10-15 02:15:28.997683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:46:20.449 [2024-10-15 02:15:29.304071] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:46:20.449 [2024-10-15 02:15:29.304147] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:46:20.449 [2024-10-15 02:15:29.370349] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:46:20.449 [2024-10-15 02:15:29.370761] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:46:20.449 [2024-10-15 02:15:29.371169] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:46:20.709 [2024-10-15 02:15:29.634639] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair.
00:46:20.709 [2024-10-15 02:15:29.647327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.647369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:46:20.709 [2024-10-15 02:15:29.647403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:46:20.709 [2024-10-15 02:15:29.647414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.647493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.647511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:46:20.709 [2024-10-15 02:15:29.647526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:46:20.709 [2024-10-15 02:15:29.647535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.647562] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:46:20.709 [2024-10-15 02:15:29.648322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:46:20.709 [2024-10-15 02:15:29.648355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.648367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:46:20.709 [2024-10-15 02:15:29.648394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms
00:46:20.709 [2024-10-15 02:15:29.648404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.650282] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:46:20.709 [2024-10-15 02:15:29.664517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.664713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:46:20.709 [2024-10-15 02:15:29.664741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.236 ms
00:46:20.709 [2024-10-15 02:15:29.664753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.664821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.664844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:46:20.709 [2024-10-15 02:15:29.664856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:46:20.709 [2024-10-15 02:15:29.664866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.673445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.673481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:46:20.709 [2024-10-15 02:15:29.673496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.489 ms
00:46:20.709 [2024-10-15 02:15:29.673506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.673594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.673611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:46:20.709 [2024-10-15 02:15:29.673623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:46:20.709 [2024-10-15 02:15:29.673632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.673685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.673700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:46:20.709 [2024-10-15 02:15:29.673712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:46:20.709 [2024-10-15 02:15:29.673721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.673753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:46:20.709 [2024-10-15 02:15:29.678034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.678207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:46:20.709 [2024-10-15 02:15:29.678233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms
00:46:20.709 [2024-10-15 02:15:29.678253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.678292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.678307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:46:20.709 [2024-10-15 02:15:29.678319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:46:20.709 [2024-10-15 02:15:29.678329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.678394] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:46:20.709 [2024-10-15 02:15:29.678443] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:46:20.709 [2024-10-15 02:15:29.678487] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:46:20.709 [2024-10-15 02:15:29.678509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:46:20.709 [2024-10-15 02:15:29.678651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:46:20.709 [2024-10-15 02:15:29.678670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:46:20.709 [2024-10-15 02:15:29.678684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:46:20.709 [2024-10-15 02:15:29.678698] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:46:20.709 [2024-10-15 02:15:29.678711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:46:20.709 [2024-10-15 02:15:29.678737] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:46:20.709 [2024-10-15 02:15:29.678761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:46:20.709 [2024-10-15 02:15:29.678771] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:46:20.709 [2024-10-15 02:15:29.678780] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:46:20.709 [2024-10-15 02:15:29.678796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.678806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:46:20.709 [2024-10-15 02:15:29.678817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms
00:46:20.709 [2024-10-15 02:15:29.678827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.678910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.709 [2024-10-15 02:15:29.678924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:46:20.709 [2024-10-15 02:15:29.678935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:46:20.709 [2024-10-15 02:15:29.678945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.709 [2024-10-15 02:15:29.679060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:46:20.709 [2024-10-15 02:15:29.679082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:46:20.709 [2024-10-15 02:15:29.679093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:46:20.709 [2024-10-15 02:15:29.679122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:46:20.709 [2024-10-15 02:15:29.679161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:46:20.709 [2024-10-15 02:15:29.679179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:46:20.709 [2024-10-15 02:15:29.679188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:46:20.709 [2024-10-15 02:15:29.679197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:46:20.709 [2024-10-15 02:15:29.679206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:46:20.709 [2024-10-15 02:15:29.679217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:46:20.709 [2024-10-15 02:15:29.679226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:46:20.709 [2024-10-15 02:15:29.679244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:46:20.709 [2024-10-15 02:15:29.679272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:46:20.709 [2024-10-15 02:15:29.679298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:46:20.709 [2024-10-15 02:15:29.679325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:46:20.709 [2024-10-15 02:15:29.679351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:46:20.709 [2024-10-15 02:15:29.679369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:46:20.709 [2024-10-15 02:15:29.679378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:46:20.709 [2024-10-15 02:15:29.679386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:46:20.709 [2024-10-15 02:15:29.679395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:46:20.709 [2024-10-15 02:15:29.679404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:46:20.710 [2024-10-15 02:15:29.679413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:46:20.710 [2024-10-15 02:15:29.679422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:46:20.710 [2024-10-15 02:15:29.679431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:46:20.710 [2024-10-15 02:15:29.679440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:20.710 [2024-10-15 02:15:29.679449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:46:20.710 [2024-10-15 02:15:29.679457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:46:20.710 [2024-10-15 02:15:29.679482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:20.710 [2024-10-15 02:15:29.679494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:46:20.710 [2024-10-15 02:15:29.679505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:46:20.710 [2024-10-15 02:15:29.679514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:46:20.710 [2024-10-15 02:15:29.679526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:46:20.710 [2024-10-15 02:15:29.679536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:46:20.710 [2024-10-15 02:15:29.679546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:46:20.710 [2024-10-15 02:15:29.679555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:46:20.710 [2024-10-15 02:15:29.679564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:46:20.710 [2024-10-15 02:15:29.679573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:46:20.710 [2024-10-15 02:15:29.679582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:46:20.710 [2024-10-15 02:15:29.679593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:46:20.710 [2024-10-15 02:15:29.679605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:46:20.710 [2024-10-15 02:15:29.679625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:46:20.710 [2024-10-15 02:15:29.679635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:46:20.710 [2024-10-15 02:15:29.679647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:46:20.710 [2024-10-15 02:15:29.679656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:46:20.710 [2024-10-15 02:15:29.679667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:46:20.710 [2024-10-15 02:15:29.679676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:46:20.710 [2024-10-15 02:15:29.679686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:46:20.710 [2024-10-15 02:15:29.679695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:46:20.710 [2024-10-15 02:15:29.679705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:46:20.710 [2024-10-15 02:15:29.679753] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:46:20.710 [2024-10-15 02:15:29.679768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:46:20.710 [2024-10-15 02:15:29.679790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:46:20.710 [2024-10-15 02:15:29.679799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:46:20.710 [2024-10-15 02:15:29.679809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:46:20.710 [2024-10-15 02:15:29.679819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.710 [2024-10-15 02:15:29.679830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:46:20.710 [2024-10-15 02:15:29.679840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms
00:46:20.710 [2024-10-15 02:15:29.679851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.725952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.726288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:46:20.968 [2024-10-15 02:15:29.726437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.042 ms
00:46:20.968 [2024-10-15 02:15:29.726492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.726781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.726865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:46:20.968 [2024-10-15 02:15:29.727041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms
00:46:20.968 [2024-10-15 02:15:29.727181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.768378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.768704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:46:20.968 [2024-10-15 02:15:29.768824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.041 ms
00:46:20.968 [2024-10-15 02:15:29.768876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.768981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.769177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:46:20.968 [2024-10-15 02:15:29.769228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:46:20.968 [2024-10-15 02:15:29.769278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.770032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.770191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:46:20.968 [2024-10-15 02:15:29.770311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms
00:46:20.968 [2024-10-15 02:15:29.770454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.770704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.770775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:46:20.968 [2024-10-15 02:15:29.770888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms
00:46:20.968 [2024-10-15 02:15:29.770961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.788142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.788336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:46:20.968 [2024-10-15 02:15:29.788459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.974 ms
00:46:20.968 [2024-10-15 02:15:29.788517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.803181] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:46:20.968 [2024-10-15 02:15:29.803393] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:46:20.968 [2024-10-15 02:15:29.803574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.803618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:46:20.968 [2024-10-15 02:15:29.803753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.881 ms
00:46:20.968 [2024-10-15 02:15:29.803796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.828376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.828576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:46:20.968 [2024-10-15 02:15:29.828604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.508 ms
00:46:20.968 [2024-10-15 02:15:29.828619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.841611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.841650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:46:20.968 [2024-10-15 02:15:29.841665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.942 ms
00:46:20.968 [2024-10-15 02:15:29.841676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.854572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.968 [2024-10-15 02:15:29.854767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:46:20.968 [2024-10-15 02:15:29.854795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.855 ms
00:46:20.968 [2024-10-15 02:15:29.854810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.968 [2024-10-15 02:15:29.855789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.855819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:46:20.969 [2024-10-15 02:15:29.855834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms
00:46:20.969 [2024-10-15 02:15:29.855861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.969 [2024-10-15 02:15:29.937253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.937335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:46:20.969 [2024-10-15 02:15:29.937357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.361 ms
00:46:20.969 [2024-10-15 02:15:29.937377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.969 [2024-10-15 02:15:29.949705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:46:20.969 [2024-10-15 02:15:29.953773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.954005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:46:20.969 [2024-10-15 02:15:29.954038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.283 ms
00:46:20.969 [2024-10-15 02:15:29.954053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.969 [2024-10-15 02:15:29.954190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.954212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:46:20.969 [2024-10-15 02:15:29.954227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:46:20.969 [2024-10-15 02:15:29.954240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.969 [2024-10-15 02:15:29.954349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.954370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:46:20.969 [2024-10-15 02:15:29.954384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:46:20.969 [2024-10-15 02:15:29.954397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.969 [2024-10-15 02:15:29.954456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.954474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:46:20.969 [2024-10-15 02:15:29.954488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:46:20.969 [2024-10-15 02:15:29.954501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:20.969 [2024-10-15 02:15:29.954569] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:46:20.969 [2024-10-15 02:15:29.954588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:20.969 [2024-10-15 02:15:29.954605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:46:20.969 [2024-10-15 02:15:29.954618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:46:20.969 [2024-10-15 02:15:29.954630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:21.227 [2024-10-15 02:15:29.986564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:21.227 [2024-10-15 02:15:29.986638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:46:21.227 [2024-10-15 02:15:29.986666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.900 ms
00:46:21.227 [2024-10-15 02:15:29.986683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:21.227 [2024-10-15 02:15:29.986811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:46:21.227 [2024-10-15 02:15:29.986840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:46:21.227 [2024-10-15 02:15:29.986858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:46:21.227 [2024-10-15 02:15:29.986874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:46:21.227 [2024-10-15 02:15:29.988644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.608 ms, result 0
00:46:22.161  [2024-10-15T02:15:32.107Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-15T02:15:33.070Z] Copying: 45/1024 [MB] (22 MBps) [2024-10-15T02:15:34.016Z] Copying: 67/1024 [MB] (22 MBps) [2024-10-15T02:15:35.394Z] Copying: 89/1024 [MB] (21 MBps) [2024-10-15T02:15:36.330Z] Copying: 112/1024 [MB] (22 MBps) [2024-10-15T02:15:37.267Z] Copying: 135/1024 [MB] (22 MBps) [2024-10-15T02:15:38.202Z] Copying: 158/1024 [MB] (23 MBps) [2024-10-15T02:15:39.138Z] Copying: 181/1024 [MB] (22 MBps) [2024-10-15T02:15:40.073Z] Copying: 204/1024 [MB] (22 MBps) [2024-10-15T02:15:41.009Z] Copying: 226/1024 [MB] (22 MBps) [2024-10-15T02:15:42.384Z] Copying: 247/1024 [MB] (21 MBps) [2024-10-15T02:15:43.322Z] Copying: 269/1024 [MB] (21 MBps) [2024-10-15T02:15:44.259Z] Copying: 291/1024 [MB] (21 MBps) [2024-10-15T02:15:45.195Z] Copying: 312/1024 [MB] (21 MBps) [2024-10-15T02:15:46.131Z] Copying: 334/1024 [MB] (21 MBps) [2024-10-15T02:15:47.068Z] Copying: 355/1024 [MB] (21 MBps) [2024-10-15T02:15:48.005Z] Copying: 377/1024 [MB] (21 MBps) [2024-10-15T02:15:49.382Z] Copying: 398/1024 [MB] (21 MBps) [2024-10-15T02:15:50.321Z] Copying: 419/1024 [MB] (21 MBps) [2024-10-15T02:15:51.257Z] Copying: 441/1024 [MB] (21 MBps) [2024-10-15T02:15:52.193Z] Copying: 463/1024 [MB] (21 MBps) [2024-10-15T02:15:53.164Z] Copying: 484/1024 [MB] (21 MBps) [2024-10-15T02:15:54.116Z] Copying: 506/1024 [MB] (21 MBps) [2024-10-15T02:15:55.053Z] Copying: 528/1024 [MB] (21 MBps) [2024-10-15T02:15:56.429Z] Copying: 549/1024 [MB] (21 MBps) [2024-10-15T02:15:57.001Z] Copying: 571/1024 [MB] (21 MBps) [2024-10-15T02:15:58.379Z] Copying: 593/1024 [MB] (22 MBps) [2024-10-15T02:15:59.318Z] Copying: 614/1024 [MB] (21 MBps) [2024-10-15T02:16:00.257Z] Copying: 635/1024 [MB] (21 MBps) [2024-10-15T02:16:01.193Z] Copying: 657/1024 [MB] (21 MBps) [2024-10-15T02:16:02.130Z] Copying: 678/1024 [MB] (21 MBps) [2024-10-15T02:16:03.067Z] Copying: 700/1024 [MB] (21 MBps) [2024-10-15T02:16:04.007Z] Copying: 721/1024 [MB] (21 MBps) [2024-10-15T02:16:05.384Z] Copying: 743/1024 [MB] (21 MBps) [2024-10-15T02:16:06.320Z] Copying: 764/1024 [MB] (20 MBps) [2024-10-15T02:16:07.256Z] Copying: 785/1024 [MB] (21 MBps) [2024-10-15T02:16:08.192Z] Copying: 807/1024 [MB] (21 MBps) [2024-10-15T02:16:09.128Z] Copying: 828/1024 [MB] (21 MBps) [2024-10-15T02:16:10.066Z] Copying: 849/1024 [MB] (21 MBps) [2024-10-15T02:16:11.004Z] Copying: 871/1024 [MB] (21 MBps) [2024-10-15T02:16:12.381Z] Copying: 892/1024 [MB] (21 MBps) [2024-10-15T02:16:13.328Z] Copying: 913/1024 [MB] (21 MBps) [2024-10-15T02:16:14.273Z] Copying: 935/1024 [MB] (21 MBps) [2024-10-15T02:16:15.209Z] Copying: 956/1024 [MB] (21 MBps) [2024-10-15T02:16:16.145Z] Copying: 978/1024 [MB] (21 MBps) [2024-10-15T02:16:17.080Z] Copying: 999/1024 [MB] (21 MBps) [2024-10-15T02:16:18.027Z] Copying: 1021/1024 [MB] (21 MBps) [2024-10-15T02:16:18.027Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-10-15 02:16:17.798776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.798927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:47:09.015 [2024-10-15 02:16:17.798954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:47:09.015 [2024-10-15 02:16:17.798968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.800342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:47:09.015 [2024-10-15 02:16:17.807291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.807493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:47:09.015 [2024-10-15 02:16:17.807653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms
00:47:09.015 [2024-10-15 02:16:17.807726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.818336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.818530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:47:09.015 [2024-10-15 02:16:17.818720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.129 ms
00:47:09.015 [2024-10-15 02:16:17.818775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.838943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.839205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:47:09.015 [2024-10-15 02:16:17.839327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.028 ms
00:47:09.015 [2024-10-15 02:16:17.839381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.844804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.844963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:47:09.015 [2024-10-15 02:16:17.845082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.235 ms
00:47:09.015 [2024-10-15 02:16:17.845136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.871379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.871582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:47:09.015 [2024-10-15 02:16:17.871719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.135 ms
00:47:09.015 [2024-10-15 02:16:17.871775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.887489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.887675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:47:09.015 [2024-10-15 02:16:17.887814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.620 ms
00:47:09.015 [2024-10-15 02:16:17.887868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.970850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.971060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:47:09.015 [2024-10-15 02:16:17.971206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.901 ms
00:47:09.015 [2024-10-15 02:16:17.971262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:17.996182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:17.996364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:47:09.015 [2024-10-15 02:16:17.996535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.795 ms
00:47:09.015 [2024-10-15 02:16:17.996685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.015 [2024-10-15 02:16:18.021139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.015 [2024-10-15 02:16:18.021305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:47:09.015 [2024-10-15 02:16:18.021473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.365 ms
00:47:09.015 [2024-10-15 02:16:18.021531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.274 [2024-10-15 02:16:18.045504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.274 [2024-10-15 02:16:18.045671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:47:09.274 [2024-10-15 02:16:18.045787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.894 ms
00:47:09.274 [2024-10-15 02:16:18.045839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.274 [2024-10-15 02:16:18.069782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:47:09.274 [2024-10-15 02:16:18.069948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:47:09.274 [2024-10-15 02:16:18.070064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.840 ms
00:47:09.274 [2024-10-15 02:16:18.070117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:47:09.274 [2024-10-15 02:16:18.070194] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:47:09.274 [2024-10-15 02:16:18.070449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 83968 / 261120 wr_cnt: 1 state: open
00:47:09.274 [2024-10-15 02:16:18.070548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.070775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.070865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:47:09.274 [2024-10-15 02:16:18.071526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0
state: free 00:47:09.274 [2024-10-15 02:16:18.071538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:09.274 [2024-10-15 02:16:18.071946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.071959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.071971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.071984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.071996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072501] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:09.275 [2024-10-15 02:16:18.072598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:09.275 [2024-10-15 02:16:18.072611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 429ddebc-ad3c-4865-b14d-60695ae1c9ae 00:47:09.275 [2024-10-15 02:16:18.072624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 83968 00:47:09.275 [2024-10-15 02:16:18.072637] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 84928 00:47:09.275 [2024-10-15 02:16:18.072650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 83968 00:47:09.275 [2024-10-15 02:16:18.072663] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:47:09.275 [2024-10-15 02:16:18.072675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:09.275 [2024-10-15 02:16:18.072705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:09.275 [2024-10-15 02:16:18.072724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:09.275 [2024-10-15 02:16:18.072746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:09.275 [2024-10-15 02:16:18.072758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:09.275 [2024-10-15 02:16:18.072771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.275 [2024-10-15 02:16:18.072783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:09.275 [2024-10-15 02:16:18.072797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:47:09.275 [2024-10-15 02:16:18.072810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.087042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.275 [2024-10-15 02:16:18.087081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:09.275 [2024-10-15 02:16:18.087099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.205 ms 00:47:09.275 [2024-10-15 02:16:18.087111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.087643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.275 [2024-10-15 02:16:18.087674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:09.275 [2024-10-15 02:16:18.087691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:47:09.275 [2024-10-15 02:16:18.087704] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.120780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.120827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:09.275 [2024-10-15 02:16:18.120852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.120864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.120922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.120940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:09.275 [2024-10-15 02:16:18.120954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.120965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.121062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.121084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:09.275 [2024-10-15 02:16:18.121098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.121118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.121144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.121160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:09.275 [2024-10-15 02:16:18.121173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.121185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.210695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.210779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:09.275 [2024-10-15 02:16:18.210801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.210823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.283372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.283475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:09.275 [2024-10-15 02:16:18.283499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.283528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.283648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.283669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:09.275 [2024-10-15 02:16:18.283684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.275 [2024-10-15 02:16:18.283698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.283789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.275 [2024-10-15 02:16:18.283809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:09.275 [2024-10-15 02:16:18.283838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:47:09.275 [2024-10-15 02:16:18.283853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.275 [2024-10-15 02:16:18.284026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.276 [2024-10-15 02:16:18.284049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:09.276 [2024-10-15 02:16:18.284064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.276 [2024-10-15 02:16:18.284078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.534 [2024-10-15 02:16:18.284140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.534 [2024-10-15 02:16:18.284177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:09.534 [2024-10-15 02:16:18.284193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.534 [2024-10-15 02:16:18.284206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.534 [2024-10-15 02:16:18.284266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.534 [2024-10-15 02:16:18.284286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:09.534 [2024-10-15 02:16:18.284301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.534 [2024-10-15 02:16:18.284314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.534 [2024-10-15 02:16:18.284408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:09.534 [2024-10-15 02:16:18.284429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:09.534 [2024-10-15 02:16:18.284445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:09.534 [2024-10-15 02:16:18.284474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.534 [2024-10-15 02:16:18.284664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.760 ms, result 0 00:47:09.534 [2024-10-15 02:16:18.285701] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:47:09.534 [2024-10-15 02:16:18.289034] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:47:10.911 00:47:10.911 00:47:10.911 02:16:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:47:12.812 02:16:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:12.812 [2024-10-15 02:16:21.586261] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:47:12.812 [2024-10-15 02:16:21.586487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80620 ] 00:47:12.812 [2024-10-15 02:16:21.760671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:13.071 [2024-10-15 02:16:21.983730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:47:13.330 [2024-10-15 02:16:22.288385] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:13.330 [2024-10-15 02:16:22.288468] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:13.590 [2024-10-15 02:16:22.434201] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:47:13.590 [2024-10-15 02:16:22.447046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.447088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:13.590 [2024-10-15 02:16:22.447106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:13.590 [2024-10-15 02:16:22.447121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.447178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.447195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:13.590 [2024-10-15 02:16:22.447207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:47:13.590 [2024-10-15 02:16:22.447216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.447243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:13.590 [2024-10-15 02:16:22.448007] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:13.590 [2024-10-15 02:16:22.448046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.448059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:13.590 [2024-10-15 02:16:22.448070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:47:13.590 [2024-10-15 02:16:22.448080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.449857] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:13.590 [2024-10-15 02:16:22.463749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.463790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:13.590 [2024-10-15 02:16:22.463806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.893 ms 00:47:13.590 [2024-10-15 02:16:22.463817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.463879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.463897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:13.590 [2024-10-15 02:16:22.463909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:47:13.590 [2024-10-15 
02:16:22.463919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.472485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.472523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:13.590 [2024-10-15 02:16:22.472538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.484 ms 00:47:13.590 [2024-10-15 02:16:22.472548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.472633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.472650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:13.590 [2024-10-15 02:16:22.472662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:47:13.590 [2024-10-15 02:16:22.472671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.472727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.472743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:13.590 [2024-10-15 02:16:22.472755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:47:13.590 [2024-10-15 02:16:22.472765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.472796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:13.590 [2024-10-15 02:16:22.477131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.477165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:13.590 [2024-10-15 02:16:22.477179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:47:13.590 [2024-10-15 02:16:22.477188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.590 [2024-10-15 02:16:22.477222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.590 [2024-10-15 02:16:22.477236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:13.590 [2024-10-15 02:16:22.477247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:47:13.591 [2024-10-15 02:16:22.477264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.591 [2024-10-15 02:16:22.477327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:13.591 [2024-10-15 02:16:22.477356] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:13.591 [2024-10-15 02:16:22.477392] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:13.591 [2024-10-15 02:16:22.477427] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:13.591 [2024-10-15 02:16:22.477522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:13.591 [2024-10-15 02:16:22.477537] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:13.591 [2024-10-15 02:16:22.477556] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:13.591 
[2024-10-15 02:16:22.477571] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:13.591 [2024-10-15 02:16:22.477583] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:13.591 [2024-10-15 02:16:22.477593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:13.591 [2024-10-15 02:16:22.477603] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:13.591 [2024-10-15 02:16:22.477613] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:13.591 [2024-10-15 02:16:22.477624] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:13.591 [2024-10-15 02:16:22.477635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.591 [2024-10-15 02:16:22.477645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:13.591 [2024-10-15 02:16:22.477656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:47:13.591 [2024-10-15 02:16:22.477666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.591 [2024-10-15 02:16:22.477751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.591 [2024-10-15 02:16:22.477766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:13.591 [2024-10-15 02:16:22.477777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:47:13.591 [2024-10-15 02:16:22.477787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.591 [2024-10-15 02:16:22.477884] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:13.591 [2024-10-15 02:16:22.477902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:13.591 [2024-10-15 02:16:22.477914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:13.591 [2024-10-15 02:16:22.477925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.477935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:13.591 [2024-10-15 02:16:22.477944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.477954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:13.591 [2024-10-15 02:16:22.477964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:13.591 [2024-10-15 02:16:22.477974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:13.591 [2024-10-15 02:16:22.477983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:13.591 [2024-10-15 02:16:22.477992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:13.591 [2024-10-15 02:16:22.478002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:13.591 [2024-10-15 02:16:22.478025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:13.591 [2024-10-15 02:16:22.478035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:13.591 [2024-10-15 02:16:22.478045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:13.591 [2024-10-15 02:16:22.478055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:47:13.591 [2024-10-15 02:16:22.478072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:13.591 [2024-10-15 02:16:22.478099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:13.591 [2024-10-15 02:16:22.478125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:13.591 [2024-10-15 02:16:22.478151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:13.591 [2024-10-15 02:16:22.478177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:13.591 [2024-10-15 02:16:22.478204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:13.591 [2024-10-15 02:16:22.478222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:13.591 [2024-10-15 02:16:22.478231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:13.591 [2024-10-15 02:16:22.478240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:13.591 [2024-10-15 02:16:22.478249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:13.591 [2024-10-15 02:16:22.478258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:13.591 [2024-10-15 02:16:22.478266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:13.591 [2024-10-15 02:16:22.478284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:13.591 [2024-10-15 02:16:22.478293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478302] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:13.591 [2024-10-15 02:16:22.478317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:13.591 [2024-10-15 02:16:22.478328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:13.591 [2024-10-15 02:16:22.478349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:13.591 [2024-10-15 02:16:22.478359] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:13.591 [2024-10-15 02:16:22.478368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:13.591 [2024-10-15 02:16:22.478377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:13.591 [2024-10-15 02:16:22.478387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:13.591 [2024-10-15 02:16:22.478396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:13.591 [2024-10-15 02:16:22.478421] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:13.591 [2024-10-15 02:16:22.478435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:13.591 [2024-10-15 02:16:22.478447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:13.591 [2024-10-15 02:16:22.478457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:13.591 [2024-10-15 02:16:22.478467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:13.591 [2024-10-15 02:16:22.478476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:13.592 [2024-10-15 02:16:22.478486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:13.592 [2024-10-15 02:16:22.478495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:13.592 [2024-10-15 02:16:22.478505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:13.592 [2024-10-15 02:16:22.478575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:13.592 [2024-10-15 02:16:22.478588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:13.592 [2024-10-15 02:16:22.478599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:13.592 [2024-10-15 02:16:22.478610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:13.592 [2024-10-15 02:16:22.478621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:13.592 [2024-10-15 02:16:22.478633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:13.592 [2024-10-15 02:16:22.478646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:13.592 [2024-10-15 02:16:22.478657] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:13.592 [2024-10-15 02:16:22.478669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:47:13.592 [2024-10-15 02:16:22.478682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:13.592 [2024-10-15 02:16:22.478693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:13.592 [2024-10-15 02:16:22.478704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:13.592 [2024-10-15 02:16:22.478716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:13.592 [2024-10-15 02:16:22.478728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.478739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:13.592 [2024-10-15 02:16:22.478758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:47:13.592 [2024-10-15 02:16:22.478771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.522799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.522855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:13.592 [2024-10-15 02:16:22.522874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.959 ms 00:47:13.592 [2024-10-15 02:16:22.522891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.522997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.523013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:13.592 [2024-10-15 02:16:22.523025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:47:13.592 [2024-10-15 02:16:22.523035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.560077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.560131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:13.592 [2024-10-15 02:16:22.560147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.957 ms 00:47:13.592 [2024-10-15 02:16:22.560158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.560208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.560224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:13.592 [2024-10-15 02:16:22.560235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:13.592 [2024-10-15 02:16:22.560246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.560895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.560920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:13.592 [2024-10-15 02:16:22.560942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:47:13.592 [2024-10-15 02:16:22.560953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.561107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
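The layout dump above is self-describing enough to cross-check: the size of the l2p region follows directly from the entry count and address size it reports. A small verification sketch (plain Python, values copied from the dump):

```python
# From the ftl_layout dump above.
l2p_entries = 20971520   # "L2P entries: 20971520"
addr_size = 4            # "L2P address size: 4" (bytes per entry)

l2p_bytes = l2p_entries * addr_size
# -> 80.0 MiB, matching "Region l2p ... blocks: 80.00 MiB"
print(l2p_bytes / (1024 * 1024), "MiB")
```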
00:47:13.592 [2024-10-15 02:16:22.561125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:13.592 [2024-10-15 02:16:22.561138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:47:13.592 [2024-10-15 02:16:22.561148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.576973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.577155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:13.592 [2024-10-15 02:16:22.577297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:47:13.592 [2024-10-15 02:16:22.577347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.592 [2024-10-15 02:16:22.591235] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:47:13.592 [2024-10-15 02:16:22.591432] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:13.592 [2024-10-15 02:16:22.591458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.592 [2024-10-15 02:16:22.591470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:13.592 [2024-10-15 02:16:22.591483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.933 ms 00:47:13.592 [2024-10-15 02:16:22.591494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.616418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.616460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:13.851 [2024-10-15 02:16:22.616475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.878 ms 00:47:13.851 [2024-10-15 02:16:22.616486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.629153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.629194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:13.851 [2024-10-15 02:16:22.629209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.615 ms 00:47:13.851 [2024-10-15 02:16:22.629220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.641526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.641566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:13.851 [2024-10-15 02:16:22.641580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.253 ms 00:47:13.851 [2024-10-15 02:16:22.641590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.642218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.642245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:13.851 [2024-10-15 02:16:22.642259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:47:13.851 [2024-10-15 02:16:22.642269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.707544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.707610] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:13.851 [2024-10-15 02:16:22.707630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.251 ms 00:47:13.851 [2024-10-15 02:16:22.707642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.718300] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:13.851 [2024-10-15 02:16:22.721272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.721305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:13.851 [2024-10-15 02:16:22.721328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.565 ms 00:47:13.851 [2024-10-15 02:16:22.721339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.721466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.721487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:13.851 [2024-10-15 02:16:22.721502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:13.851 [2024-10-15 02:16:22.721512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.723263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.723298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:13.851 [2024-10-15 02:16:22.723312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:47:13.851 [2024-10-15 02:16:22.723328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.723362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.723378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:13.851 [2024-10-15 02:16:22.723390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:13.851 [2024-10-15 02:16:22.723401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.723481] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:13.851 [2024-10-15 02:16:22.723499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.723510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:13.851 [2024-10-15 02:16:22.723527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:47:13.851 [2024-10-15 02:16:22.723538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.851 [2024-10-15 02:16:22.752097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.851 [2024-10-15 02:16:22.752138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:13.851 [2024-10-15 02:16:22.752155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.526 ms 00:47:13.851 [2024-10-15 02:16:22.752167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.852 [2024-10-15 02:16:22.752249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:13.852 [2024-10-15 02:16:22.752267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:13.852 [2024-10-15 02:16:22.752294] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:47:13.852 [2024-10-15 02:16:22.752308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:13.852 [2024-10-15 02:16:22.753800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.115 ms, result 0 00:47:15.244  [2024-10-15T02:16:25.191Z] Copying: 1252/1048576 [kB] (1252 kBps) [2024-10-15T02:16:26.127Z] Copying: 4008/1048576 [kB] (2756 kBps) [2024-10-15T02:16:27.066Z] Copying: 20/1024 [MB] (16 MBps) [2024-10-15T02:16:28.003Z] Copying: 49/1024 [MB] (28 MBps) [2024-10-15T02:16:29.379Z] Copying: 77/1024 [MB] (28 MBps) [2024-10-15T02:16:30.316Z] Copying: 104/1024 [MB] (27 MBps) [2024-10-15T02:16:31.253Z] Copying: 131/1024 [MB] (26 MBps) [2024-10-15T02:16:32.189Z] Copying: 158/1024 [MB] (27 MBps) [2024-10-15T02:16:33.135Z] Copying: 186/1024 [MB] (27 MBps) [2024-10-15T02:16:34.097Z] Copying: 215/1024 [MB] (28 MBps) [2024-10-15T02:16:35.034Z] Copying: 244/1024 [MB] (29 MBps) [2024-10-15T02:16:35.970Z] Copying: 273/1024 [MB] (29 MBps) [2024-10-15T02:16:37.348Z] Copying: 302/1024 [MB] (29 MBps) [2024-10-15T02:16:38.283Z] Copying: 331/1024 [MB] (29 MBps) [2024-10-15T02:16:38.962Z] Copying: 360/1024 [MB] (29 MBps) [2024-10-15T02:16:40.339Z] Copying: 389/1024 [MB] (28 MBps) [2024-10-15T02:16:41.275Z] Copying: 418/1024 [MB] (29 MBps) [2024-10-15T02:16:42.211Z] Copying: 446/1024 [MB] (28 MBps) [2024-10-15T02:16:43.147Z] Copying: 475/1024 [MB] (28 MBps) [2024-10-15T02:16:44.083Z] Copying: 503/1024 [MB] (28 MBps) [2024-10-15T02:16:45.021Z] Copying: 532/1024 [MB] (28 MBps) [2024-10-15T02:16:45.958Z] Copying: 561/1024 [MB] (28 MBps) [2024-10-15T02:16:47.334Z] Copying: 590/1024 [MB] (29 MBps) [2024-10-15T02:16:48.270Z] Copying: 619/1024 [MB] (29 MBps) [2024-10-15T02:16:49.206Z] Copying: 648/1024 [MB] (29 MBps) [2024-10-15T02:16:50.142Z] Copying: 677/1024 [MB] (28 MBps) [2024-10-15T02:16:51.078Z] Copying: 706/1024 [MB] (29 MBps) [2024-10-15T02:16:52.014Z] Copying: 736/1024 [MB] (29 MBps) [2024-10-15T02:16:52.960Z] Copying: 765/1024 [MB] (29 MBps) [2024-10-15T02:16:54.337Z] Copying: 794/1024 [MB] (29 MBps) [2024-10-15T02:16:55.273Z] Copying: 824/1024 [MB] (29 MBps) [2024-10-15T02:16:56.209Z] Copying: 852/1024 [MB] (28 MBps) [2024-10-15T02:16:57.145Z] Copying: 880/1024 [MB] (28 MBps) [2024-10-15T02:16:58.082Z] Copying: 909/1024 [MB] (28 MBps) [2024-10-15T02:16:59.019Z] Copying: 937/1024 [MB] (28 MBps) [2024-10-15T02:16:59.956Z] Copying: 965/1024 [MB] (28 MBps) [2024-10-15T02:17:01.333Z] Copying: 993/1024 [MB] (28 MBps) [2024-10-15T02:17:01.333Z] Copying: 1020/1024 [MB] (27 MBps) [2024-10-15T02:17:01.333Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-15 02:17:01.236935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.237009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:52.321 [2024-10-15 02:17:01.237033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:52.321 [2024-10-15 02:17:01.237047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.237082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:52.321 [2024-10-15 02:17:01.241048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.241081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:52.321 
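The "average 26 MBps" figure at the end of the copy run above is recoverable from the timestamps alone. A quick check (plain Python; the two wall-clock times are copied from the surrounding records — 'FTL startup' finishing and the dirty-shutdown trace beginning — and the integer-truncated result matches the logged average):

```python
from datetime import datetime

# Wall-clock bounds copied from the log: 'FTL startup' finished at
# 02:16:22.753800 and the shutdown trace begins at 02:17:01.236935.
start = datetime.strptime("02:16:22.753800", "%H:%M:%S.%f")
end = datetime.strptime("02:17:01.236935", "%H:%M:%S.%f")
elapsed = (end - start).total_seconds()   # ~38.5 s for 1024 MB

# -> ~26.6, consistent with "Copying: 1024/1024 [MB] (average 26 MBps)"
print(f"{1024 / elapsed:.1f} MBps")
```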
[2024-10-15 02:17:01.241097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:47:52.321 [2024-10-15 02:17:01.241110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.241368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.241390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:52.321 [2024-10-15 02:17:01.241404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:47:52.321 [2024-10-15 02:17:01.241448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.255458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.255667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:52.321 [2024-10-15 02:17:01.255816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.980 ms 00:47:52.321 [2024-10-15 02:17:01.255966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.262600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.262769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:52.321 [2024-10-15 02:17:01.262941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.543 ms 00:47:52.321 [2024-10-15 02:17:01.262996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.290604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.290779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:52.321 [2024-10-15 02:17:01.290806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.469 ms 00:47:52.321 [2024-10-15 02:17:01.290818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.306209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.306242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:52.321 [2024-10-15 02:17:01.306256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.303 ms 00:47:52.321 [2024-10-15 02:17:01.306273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.321 [2024-10-15 02:17:01.307911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.321 [2024-10-15 02:17:01.307936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:52.321 [2024-10-15 02:17:01.307949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:47:52.321 [2024-10-15 02:17:01.307959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.581 [2024-10-15 02:17:01.334822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.581 [2024-10-15 02:17:01.335033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:52.581 [2024-10-15 02:17:01.335194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.843 ms 00:47:52.581 [2024-10-15 02:17:01.335259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.581 [2024-10-15 02:17:01.364984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.581 [2024-10-15 02:17:01.365140] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:52.581 [2024-10-15 02:17:01.365245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.646 ms 00:47:52.581 [2024-10-15 02:17:01.365290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.581 [2024-10-15 02:17:01.389717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.581 [2024-10-15 02:17:01.389878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:52.581 [2024-10-15 02:17:01.389982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.256 ms 00:47:52.581 [2024-10-15 02:17:01.390027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.581 [2024-10-15 02:17:01.413908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.581 [2024-10-15 02:17:01.414054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:52.581 [2024-10-15 02:17:01.414169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.788 ms 00:47:52.581 [2024-10-15 02:17:01.414214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.581 [2024-10-15 02:17:01.414280] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:52.581 [2024-10-15 02:17:01.414492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:47:52.581 [2024-10-15 02:17:01.414632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:47:52.581 [2024-10-15 02:17:01.414757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.414820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.415933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 
261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:52.581 [2024-10-15 02:17:01.416154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416707] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.416979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 
02:17:01.416989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:52.582 [2024-10-15 02:17:01.417104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:52.582 [2024-10-15 02:17:01.417115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 429ddebc-ad3c-4865-b14d-60695ae1c9ae 00:47:52.582 [2024-10-15 02:17:01.417125] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:47:52.582 [2024-10-15 02:17:01.417135] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 180672 00:47:52.582 [2024-10-15 02:17:01.417145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 178688 00:47:52.582 [2024-10-15 02:17:01.417157] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0111 00:47:52.582 [2024-10-15 02:17:01.417167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:52.582 [2024-10-15 02:17:01.417178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:52.582 [2024-10-15 02:17:01.417188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:52.582 [2024-10-15 02:17:01.417197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:52.582 [2024-10-15 02:17:01.417207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:52.582 [2024-10-15 02:17:01.417217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.582 [2024-10-15 02:17:01.417240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:52.582 [2024-10-15 02:17:01.417255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:47:52.582 [2024-10-15 02:17:01.417266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.582 [2024-10-15 02:17:01.431786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.582 [2024-10-15 02:17:01.431819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:52.582 [2024-10-15 02:17:01.431835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.494 ms 00:47:52.582 [2024-10-15 02:17:01.431845] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:47:52.582 [2024-10-15 02:17:01.432250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.582 [2024-10-15 02:17:01.432272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:52.583 [2024-10-15 02:17:01.432285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:47:52.583 [2024-10-15 02:17:01.432294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.583 [2024-10-15 02:17:01.464507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.583 [2024-10-15 02:17:01.464546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:52.583 [2024-10-15 02:17:01.464561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.583 [2024-10-15 02:17:01.464572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.583 [2024-10-15 02:17:01.464659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.583 [2024-10-15 02:17:01.464680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:52.583 [2024-10-15 02:17:01.464691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.583 [2024-10-15 02:17:01.464701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.583 [2024-10-15 02:17:01.464792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.583 [2024-10-15 02:17:01.464811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:52.583 [2024-10-15 02:17:01.464846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.583 [2024-10-15 02:17:01.464858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.583 [2024-10-15 02:17:01.464882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.583 [2024-10-15 02:17:01.464902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:52.583 [2024-10-15 02:17:01.464912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.583 [2024-10-15 02:17:01.464923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.583 [2024-10-15 02:17:01.548998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.583 [2024-10-15 02:17:01.549051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:52.583 [2024-10-15 02:17:01.549068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.583 [2024-10-15 02:17:01.549078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.620875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:52.842 [2024-10-15 02:17:01.621103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.621188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:52.842 [2024-10-15 02:17:01.621217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.621297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:52.842 [2024-10-15 02:17:01.621332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.621533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:52.842 [2024-10-15 02:17:01.621567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.621658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:52.842 [2024-10-15 02:17:01.621688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.621751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:52.842 [2024-10-15 02:17:01.621778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.621879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:52.842 [2024-10-15 02:17:01.621900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:52.842 [2024-10-15 02:17:01.621918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:52.842 [2024-10-15 02:17:01.621928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.842 [2024-10-15 02:17:01.622060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.103 ms, result 0 00:47:52.842 [2024-10-15 02:17:01.623157] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:47:52.842 [2024-10-15 02:17:01.626083] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 00:47:53.778 00:47:53.778 00:47:53.778 02:17:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:47:55.682 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:47:55.682 02:17:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:55.682 [2024-10-15 02:17:04.447679] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
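The md5sum/spdk_dd pair above is the read-back-and-verify step of this dirty-shutdown test: md5sum -c checks an already-copied region of the FTL volume against a checksum recorded at write time, and spdk_dd then pulls the next region out of ftl0 (--count=262144, --skip=262144) so it can be checked the same way. The statistics printed during the preceding 'FTL shutdown' are also easy to sanity-check by hand: WAF = total writes / user writes = 180672 / 178688 ≈ 1.0111, exactly the value logged. A minimal sketch of the verify pattern, assuming an ftl0 bdev described by ftl.json and a checksum file recorded when the data was written (testfile2.md5 is an illustrative name, not taken from this log):

    #!/usr/bin/env bash
    set -e
    # Paths mirror the spdk_dd invocation above; MD5_FILE is an assumption.
    SPDK=/home/vagrant/spdk_repo/spdk
    OUT=$SPDK/test/ftl/testfile2
    MD5_FILE=$SPDK/test/ftl/testfile2.md5

    # Read the region back out of the recovered FTL bdev, exactly as above.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$OUT" \
        --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"

    # Fail if the read-back data does not match the checksum taken at write time.
    md5sum -c "$MD5_FILE"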
00:47:55.682 [2024-10-15 02:17:04.447863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81046 ] 00:47:55.682 [2024-10-15 02:17:04.622718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:55.941 [2024-10-15 02:17:04.871471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:47:56.200 [2024-10-15 02:17:05.174448] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:56.200 [2024-10-15 02:17:05.174723] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:56.461 [2024-10-15 02:17:05.320422] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:47:56.461 [2024-10-15 02:17:05.333135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.333176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:56.461 [2024-10-15 02:17:05.333195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:56.461 [2024-10-15 02:17:05.333210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.333267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.333284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:56.461 [2024-10-15 02:17:05.333296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:47:56.461 [2024-10-15 02:17:05.333305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.333332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:56.461 [2024-10-15 02:17:05.334188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:56.461 [2024-10-15 02:17:05.334240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.334252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:56.461 [2024-10-15 02:17:05.334264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:47:56.461 [2024-10-15 02:17:05.334274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.336284] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:56.461 [2024-10-15 02:17:05.349752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.349790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:56.461 [2024-10-15 02:17:05.349806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.469 ms 00:47:56.461 [2024-10-15 02:17:05.349815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.349876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.349894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:56.461 [2024-10-15 02:17:05.349906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:47:56.461 [2024-10-15 
02:17:05.349916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.358256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.358293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:56.461 [2024-10-15 02:17:05.358306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.276 ms 00:47:56.461 [2024-10-15 02:17:05.358316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.358398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.358433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:56.461 [2024-10-15 02:17:05.358446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:47:56.461 [2024-10-15 02:17:05.358456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.358569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.358587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:56.461 [2024-10-15 02:17:05.358599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:47:56.461 [2024-10-15 02:17:05.358610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.358642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:56.461 [2024-10-15 02:17:05.362854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.362886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:56.461 [2024-10-15 02:17:05.362900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:47:56.461 [2024-10-15 02:17:05.362910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.362964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.362980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:56.461 [2024-10-15 02:17:05.362991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:47:56.461 [2024-10-15 02:17:05.363006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.363045] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:56.461 [2024-10-15 02:17:05.363072] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:56.461 [2024-10-15 02:17:05.363108] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:56.461 [2024-10-15 02:17:05.363125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:56.461 [2024-10-15 02:17:05.363214] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:56.461 [2024-10-15 02:17:05.363228] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:56.461 [2024-10-15 02:17:05.363246] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:56.461 
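As a quick cross-check on the layout summary that follows: the layout setup reports 20971520 L2P entries with a 4-byte address size, and

    20971520 entries x 4 B = 83886080 B = 80.00 MiB

which is exactly the size of the l2p region in the NV cache layout dump below ('Region l2p ... blocks: 80.00 MiB').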
[2024-10-15 02:17:05.363259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:56.461 [2024-10-15 02:17:05.363271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:56.461 [2024-10-15 02:17:05.363281] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:56.461 [2024-10-15 02:17:05.363291] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:56.461 [2024-10-15 02:17:05.363301] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:56.461 [2024-10-15 02:17:05.363311] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:56.461 [2024-10-15 02:17:05.363322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.363332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:56.461 [2024-10-15 02:17:05.363342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:47:56.461 [2024-10-15 02:17:05.363352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.363467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.461 [2024-10-15 02:17:05.363485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:56.461 [2024-10-15 02:17:05.363496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:47:56.461 [2024-10-15 02:17:05.363506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.461 [2024-10-15 02:17:05.363609] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:56.461 [2024-10-15 02:17:05.363629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:56.461 [2024-10-15 02:17:05.363641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:56.461 [2024-10-15 02:17:05.363652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:56.461 [2024-10-15 02:17:05.363663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:56.461 [2024-10-15 02:17:05.363671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:56.461 [2024-10-15 02:17:05.363681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:56.461 [2024-10-15 02:17:05.363690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:56.461 [2024-10-15 02:17:05.363700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:56.461 [2024-10-15 02:17:05.363710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:56.461 [2024-10-15 02:17:05.363718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:56.461 [2024-10-15 02:17:05.363727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:56.461 [2024-10-15 02:17:05.363747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:56.461 [2024-10-15 02:17:05.363758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:56.461 [2024-10-15 02:17:05.363767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:56.461 [2024-10-15 02:17:05.363776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:56.461 [2024-10-15 02:17:05.363786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:47:56.462 [2024-10-15 02:17:05.363795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:56.462 [2024-10-15 02:17:05.363804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:56.462 [2024-10-15 02:17:05.363827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:56.462 [2024-10-15 02:17:05.363838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:56.462 [2024-10-15 02:17:05.363847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:56.462 [2024-10-15 02:17:05.363855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:56.462 [2024-10-15 02:17:05.363864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:56.462 [2024-10-15 02:17:05.363872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:56.462 [2024-10-15 02:17:05.363881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:56.462 [2024-10-15 02:17:05.363889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:56.462 [2024-10-15 02:17:05.363898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:56.462 [2024-10-15 02:17:05.363906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:56.462 [2024-10-15 02:17:05.363915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:56.462 [2024-10-15 02:17:05.363923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:56.462 [2024-10-15 02:17:05.363931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:56.462 [2024-10-15 02:17:05.363940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:56.462 [2024-10-15 02:17:05.363948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:56.462 [2024-10-15 02:17:05.363958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:56.462 [2024-10-15 02:17:05.363966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:56.462 [2024-10-15 02:17:05.363975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:56.462 [2024-10-15 02:17:05.363983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:56.462 [2024-10-15 02:17:05.363991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:56.462 [2024-10-15 02:17:05.364000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:56.462 [2024-10-15 02:17:05.364008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:56.462 [2024-10-15 02:17:05.364016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:56.462 [2024-10-15 02:17:05.364025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:56.462 [2024-10-15 02:17:05.364034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:56.462 [2024-10-15 02:17:05.364048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:56.462 [2024-10-15 02:17:05.364058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:56.462 [2024-10-15 02:17:05.364068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:56.462 [2024-10-15 02:17:05.364078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:56.462 [2024-10-15 02:17:05.364087] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:56.462 [2024-10-15 02:17:05.364098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:56.462 [2024-10-15 02:17:05.364107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:56.462 [2024-10-15 02:17:05.364116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:56.462 [2024-10-15 02:17:05.364124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:56.462 [2024-10-15 02:17:05.364134] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:56.462 [2024-10-15 02:17:05.364146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:56.462 [2024-10-15 02:17:05.364166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:56.462 [2024-10-15 02:17:05.364176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:56.462 [2024-10-15 02:17:05.364185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:56.462 [2024-10-15 02:17:05.364194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:56.462 [2024-10-15 02:17:05.364203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:56.462 [2024-10-15 02:17:05.364212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:56.462 [2024-10-15 02:17:05.364228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:56.462 [2024-10-15 02:17:05.364238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:56.462 [2024-10-15 02:17:05.364248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:56.462 [2024-10-15 02:17:05.364294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:56.462 [2024-10-15 02:17:05.364306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:56.462 [2024-10-15 02:17:05.364327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:56.462 [2024-10-15 02:17:05.364336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:56.462 [2024-10-15 02:17:05.364346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:56.462 [2024-10-15 02:17:05.364356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.364365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:56.462 [2024-10-15 02:17:05.364381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:47:56.462 [2024-10-15 02:17:05.364392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.409938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.409994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:56.462 [2024-10-15 02:17:05.410012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.472 ms 00:47:56.462 [2024-10-15 02:17:05.410029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.410138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.410154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:56.462 [2024-10-15 02:17:05.410166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:47:56.462 [2024-10-15 02:17:05.410176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.446944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.446990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:56.462 [2024-10-15 02:17:05.447006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.685 ms 00:47:56.462 [2024-10-15 02:17:05.447016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.447064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.447079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:56.462 [2024-10-15 02:17:05.447091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:56.462 [2024-10-15 02:17:05.447100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.447725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.447749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:56.462 [2024-10-15 02:17:05.447770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:47:56.462 [2024-10-15 02:17:05.447781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.447947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:47:56.462 [2024-10-15 02:17:05.447965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:56.462 [2024-10-15 02:17:05.447977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:47:56.462 [2024-10-15 02:17:05.447987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.462 [2024-10-15 02:17:05.463332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.462 [2024-10-15 02:17:05.463493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:56.462 [2024-10-15 02:17:05.463639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.304 ms 00:47:56.462 [2024-10-15 02:17:05.463686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.722 [2024-10-15 02:17:05.478426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:47:56.722 [2024-10-15 02:17:05.478611] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:56.722 [2024-10-15 02:17:05.478772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.722 [2024-10-15 02:17:05.478945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:56.722 [2024-10-15 02:17:05.478969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.915 ms 00:47:56.722 [2024-10-15 02:17:05.478997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.722 [2024-10-15 02:17:05.502446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.502503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:56.723 [2024-10-15 02:17:05.502528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.404 ms 00:47:56.723 [2024-10-15 02:17:05.502541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.514862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.514899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:56.723 [2024-10-15 02:17:05.514914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.272 ms 00:47:56.723 [2024-10-15 02:17:05.514937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.527033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.527072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:56.723 [2024-10-15 02:17:05.527086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.057 ms 00:47:56.723 [2024-10-15 02:17:05.527095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.527801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.527852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:56.723 [2024-10-15 02:17:05.527866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:47:56.723 [2024-10-15 02:17:05.527876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.591277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.591350] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:56.723 [2024-10-15 02:17:05.591368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.377 ms 00:47:56.723 [2024-10-15 02:17:05.591379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.601260] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:56.723 [2024-10-15 02:17:05.603407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.603446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:56.723 [2024-10-15 02:17:05.603466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.943 ms 00:47:56.723 [2024-10-15 02:17:05.603476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.603565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.603583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:56.723 [2024-10-15 02:17:05.603595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:56.723 [2024-10-15 02:17:05.603605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.604570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.604604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:56.723 [2024-10-15 02:17:05.604618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:47:56.723 [2024-10-15 02:17:05.604635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.604663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.604678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:56.723 [2024-10-15 02:17:05.604689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:56.723 [2024-10-15 02:17:05.604699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.604740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:56.723 [2024-10-15 02:17:05.604757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.604767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:56.723 [2024-10-15 02:17:05.604783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:47:56.723 [2024-10-15 02:17:05.604794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.629564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.629606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:56.723 [2024-10-15 02:17:05.629622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.745 ms 00:47:56.723 [2024-10-15 02:17:05.629633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.629709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.723 [2024-10-15 02:17:05.629726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:56.723 [2024-10-15 02:17:05.629738] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:47:56.723 [2024-10-15 02:17:05.629751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.723 [2024-10-15 02:17:05.631269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.494 ms, result 0 00:47:58.103  [2024-10-15T02:17:08.053Z] Copying: 23/1024 [MB] (23 MBps) [2024-10-15T02:17:08.991Z] Copying: 46/1024 [MB] (23 MBps) [2024-10-15T02:17:09.928Z] Copying: 70/1024 [MB] (23 MBps) [2024-10-15T02:17:10.865Z] Copying: 93/1024 [MB] (22 MBps) [2024-10-15T02:17:11.804Z] Copying: 116/1024 [MB] (22 MBps) [2024-10-15T02:17:13.232Z] Copying: 139/1024 [MB] (22 MBps) [2024-10-15T02:17:13.799Z] Copying: 162/1024 [MB] (22 MBps) [2024-10-15T02:17:15.176Z] Copying: 185/1024 [MB] (22 MBps) [2024-10-15T02:17:16.112Z] Copying: 207/1024 [MB] (22 MBps) [2024-10-15T02:17:17.047Z] Copying: 230/1024 [MB] (22 MBps) [2024-10-15T02:17:17.991Z] Copying: 254/1024 [MB] (23 MBps) [2024-10-15T02:17:18.926Z] Copying: 277/1024 [MB] (23 MBps) [2024-10-15T02:17:19.861Z] Copying: 300/1024 [MB] (23 MBps) [2024-10-15T02:17:20.798Z] Copying: 324/1024 [MB] (23 MBps) [2024-10-15T02:17:22.175Z] Copying: 347/1024 [MB] (23 MBps) [2024-10-15T02:17:23.111Z] Copying: 370/1024 [MB] (22 MBps) [2024-10-15T02:17:24.047Z] Copying: 393/1024 [MB] (23 MBps) [2024-10-15T02:17:24.985Z] Copying: 417/1024 [MB] (23 MBps) [2024-10-15T02:17:25.919Z] Copying: 440/1024 [MB] (23 MBps) [2024-10-15T02:17:26.855Z] Copying: 464/1024 [MB] (23 MBps) [2024-10-15T02:17:28.232Z] Copying: 487/1024 [MB] (23 MBps) [2024-10-15T02:17:28.799Z] Copying: 510/1024 [MB] (23 MBps) [2024-10-15T02:17:30.176Z] Copying: 533/1024 [MB] (22 MBps) [2024-10-15T02:17:31.147Z] Copying: 557/1024 [MB] (24 MBps) [2024-10-15T02:17:32.083Z] Copying: 583/1024 [MB] (25 MBps) [2024-10-15T02:17:33.018Z] Copying: 609/1024 [MB] (25 MBps) [2024-10-15T02:17:33.955Z] Copying: 634/1024 [MB] (25 MBps) [2024-10-15T02:17:34.891Z] Copying: 660/1024 [MB] (25 MBps) [2024-10-15T02:17:35.826Z] Copying: 685/1024 [MB] (25 MBps) [2024-10-15T02:17:37.202Z] Copying: 711/1024 [MB] (25 MBps) [2024-10-15T02:17:38.147Z] Copying: 736/1024 [MB] (25 MBps) [2024-10-15T02:17:39.084Z] Copying: 762/1024 [MB] (25 MBps) [2024-10-15T02:17:40.020Z] Copying: 787/1024 [MB] (25 MBps) [2024-10-15T02:17:40.956Z] Copying: 813/1024 [MB] (25 MBps) [2024-10-15T02:17:41.891Z] Copying: 838/1024 [MB] (25 MBps) [2024-10-15T02:17:42.825Z] Copying: 864/1024 [MB] (25 MBps) [2024-10-15T02:17:44.202Z] Copying: 890/1024 [MB] (25 MBps) [2024-10-15T02:17:45.138Z] Copying: 915/1024 [MB] (25 MBps) [2024-10-15T02:17:46.072Z] Copying: 941/1024 [MB] (25 MBps) [2024-10-15T02:17:47.008Z] Copying: 967/1024 [MB] (25 MBps) [2024-10-15T02:17:47.943Z] Copying: 993/1024 [MB] (25 MBps) [2024-10-15T02:17:48.201Z] Copying: 1018/1024 [MB] (25 MBps) [2024-10-15T02:17:48.460Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-10-15 02:17:48.222501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.448 [2024-10-15 02:17:48.222651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:48:39.448 [2024-10-15 02:17:48.222690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:39.448 [2024-10-15 02:17:48.222724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.448 [2024-10-15 02:17:48.222778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:48:39.448 
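Every management step in these traces is logged by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, so a per-step timing summary can be pulled straight out of a console log like this one. A small sketch, assuming the console has been saved to console.log with one entry per line as it was originally emitted (the helper itself is illustrative, not part of the SPDK tree):

    # Print "<duration> ms  <step name>" for every traced FTL management step.
    awk '/428:trace_step/ { line = $0
             sub(/.*name: /, "", line)      # keep only the step name
             sub(/ [0-9:.]+$/, "", line)    # strip the trailing console timestamp
             name = line }
         /430:trace_step/ { line = $0
             sub(/.*duration: /, "", line)  # keep "<value> ms ..."
             split(line, f, " ")
             printf "%10s ms  %s\n", f[1], name }' console.log

Run against the first shutdown above, this reports 3.937 ms for 'Unregister IO device' and 27.469 ms for 'Persist NV cache metadata', matching the inline figures.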
[2024-10-15 02:17:48.226743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.448 [2024-10-15 02:17:48.226789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:48:39.448 [2024-10-15 02:17:48.226809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:48:39.448 [2024-10-15 02:17:48.226823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.448 [2024-10-15 02:17:48.227156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.448 [2024-10-15 02:17:48.227184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:48:39.448 [2024-10-15 02:17:48.227200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:48:39.448 [2024-10-15 02:17:48.227223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.448 [2024-10-15 02:17:48.230163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.448 [2024-10-15 02:17:48.230363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:39.448 [2024-10-15 02:17:48.230394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:48:39.448 [2024-10-15 02:17:48.230410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.449 [2024-10-15 02:17:48.235971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.449 [2024-10-15 02:17:48.236010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:39.449 [2024-10-15 02:17:48.236028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.513 ms 00:48:39.449 [2024-10-15 02:17:48.236040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.449 [2024-10-15 02:17:48.263053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.449 [2024-10-15 02:17:48.263100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:39.449 [2024-10-15 02:17:48.263119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.930 ms 00:48:39.449 [2024-10-15 02:17:48.263132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.449 [2024-10-15 02:17:48.278809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.449 [2024-10-15 02:17:48.278853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:39.449 [2024-10-15 02:17:48.278872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.631 ms 00:48:39.449 [2024-10-15 02:17:48.278884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.449 [2024-10-15 02:17:48.280848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.449 [2024-10-15 02:17:48.280889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:39.449 [2024-10-15 02:17:48.280923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:48:39.449 [2024-10-15 02:17:48.280936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.449 [2024-10-15 02:17:48.305957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.449 [2024-10-15 02:17:48.306000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:39.449 [2024-10-15 02:17:48.306018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.997 ms 00:48:39.449 [2024-10-15 02:17:48.306029] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:39.449 [2024-10-15 02:17:48.330457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:39.449 [2024-10-15 02:17:48.330653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:48:39.449 [2024-10-15 02:17:48.330683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.386 ms
00:48:39.449 [2024-10-15 02:17:48.330696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:39.449 [2024-10-15 02:17:48.354726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:39.449 [2024-10-15 02:17:48.354769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:48:39.449 [2024-10-15 02:17:48.354802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.965 ms
00:48:39.449 [2024-10-15 02:17:48.354814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:39.449 [2024-10-15 02:17:48.379376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:39.449 [2024-10-15 02:17:48.379468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:48:39.449 [2024-10-15 02:17:48.379489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.491 ms
00:48:39.449 [2024-10-15 02:17:48.379501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:39.449 [2024-10-15 02:17:48.379545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:48:39.449 [2024-10-15 02:17:48.379573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:48:39.449 [2024-10-15 02:17:48.379589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[Bands 3 through 100 elided: 98 identical ftl_dev_dump_bands entries, each reading "0 / 261120 wr_cnt: 0 state: free"]
00:48:39.450 [2024-10-15 02:17:48.380892] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:48:39.450 [2024-10-15 02:17:48.380904] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 429ddebc-ad3c-4865-b14d-60695ae1c9ae
00:48:39.450 [2024-10-15 02:17:48.380917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:48:39.450 [2024-10-15 02:17:48.380928] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:48:39.450 [2024-10-15 02:17:48.380942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:48:39.450 [2024-10-15 02:17:48.380954] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:48:39.450 [2024-10-15 02:17:48.380967] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:48:39.450 [2024-10-15 02:17:48.380988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:48:39.450 [2024-10-15 02:17:48.381000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:48:39.450 [2024-10-15 02:17:48.381010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:48:39.450 [2024-10-15 02:17:48.381021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:48:39.450 [2024-10-15 02:17:48.381047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:39.450 [2024-10-15 02:17:48.381060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:48:39.450 [2024-10-15 02:17:48.381073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms
00:48:39.450 [2024-10-15 02:17:48.381086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:48:39.450 [2024-10-15 02:17:48.395832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:48:39.450 [2024-10-15 02:17:48.395874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:48:39.450 [2024-10-15 02:17:48.395918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.721 ms 00:48:39.450 [2024-10-15 02:17:48.395931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.450 [2024-10-15 02:17:48.396398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.450 [2024-10-15 02:17:48.396435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:48:39.450 [2024-10-15 02:17:48.396455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:48:39.450 [2024-10-15 02:17:48.396469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.450 [2024-10-15 02:17:48.429974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.450 [2024-10-15 02:17:48.430027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:39.450 [2024-10-15 02:17:48.430062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.450 [2024-10-15 02:17:48.430076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.450 [2024-10-15 02:17:48.430143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.450 [2024-10-15 02:17:48.430162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:39.450 [2024-10-15 02:17:48.430175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.450 [2024-10-15 02:17:48.430187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.450 [2024-10-15 02:17:48.430282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.450 [2024-10-15 02:17:48.430304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:39.450 [2024-10-15 02:17:48.430327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.450 [2024-10-15 02:17:48.430340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.450 [2024-10-15 02:17:48.430367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.450 [2024-10-15 02:17:48.430383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:39.450 [2024-10-15 02:17:48.430396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.450 [2024-10-15 02:17:48.430408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.521716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.522078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:39.713 [2024-10-15 02:17:48.522113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.713 [2024-10-15 02:17:48.522129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.595915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.595997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:39.713 [2024-10-15 02:17:48.596021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.713 [2024-10-15 02:17:48.596035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.596172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.596193] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:39.713 [2024-10-15 02:17:48.596208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.713 [2024-10-15 02:17:48.596231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.596283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.596301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:39.713 [2024-10-15 02:17:48.596315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.713 [2024-10-15 02:17:48.596328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.596492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.596516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:39.713 [2024-10-15 02:17:48.596531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.713 [2024-10-15 02:17:48.596544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.596636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.596657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:39.713 [2024-10-15 02:17:48.596671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.713 [2024-10-15 02:17:48.596684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.713 [2024-10-15 02:17:48.596744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.713 [2024-10-15 02:17:48.596762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:39.714 [2024-10-15 02:17:48.596776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.714 [2024-10-15 02:17:48.596789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.714 [2024-10-15 02:17:48.596881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:39.714 [2024-10-15 02:17:48.596902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:39.714 [2024-10-15 02:17:48.596917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:39.714 [2024-10-15 02:17:48.596930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.714 [2024-10-15 02:17:48.597110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.579 ms, result 0 00:48:39.714 [2024-10-15 02:17:48.598288] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001992ada0 was disconnected and freed. delete nvme_qpair. 00:48:39.714 [2024-10-15 02:17:48.601670] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x20001a106720 was disconnected and freed. delete nvme_qpair. 
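The md5sum check that follows (dirty_shutdown.sh line 96 in the xtrace below) verifies that data written before the simulated dirty shutdown survived recovery from the NV cache. A minimal sketch of the round trip, assuming the checksum was recorded while the FTL bdev was still live (the surrounding steps are inferred, not quoted from the script):

    # sketch of the write/verify pattern; assumed shape, not the literal script
    md5sum testfile2 > testfile2.md5   # checksum taken before the dirty shutdown
    # ... kill the target without a clean FTL shutdown, restart, recover from NV cache ...
    md5sum -c testfile2.md5            # 'testfile2: OK' means the recovered data is intact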
00:48:40.649 00:48:40.649 00:48:40.649 02:17:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:48:42.551 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:48:42.551 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:48:42.551 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:48:42.551 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:48:42.551 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79074 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79074 ']' 00:48:42.809 Process with pid 79074 is not found 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 79074 00:48:42.809 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79074) - No such process 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 79074 is not found' 00:48:42.809 02:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:48:43.068 Remove shared memory files 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:48:43.068 00:48:43.068 real 4m0.467s 00:48:43.068 user 4m38.095s 00:48:43.068 sys 0m34.515s 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:43.068 02:17:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:43.068 ************************************ 00:48:43.068 END TEST ftl_dirty_shutdown 00:48:43.068 ************************************ 00:48:43.327 02:17:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:48:43.327 02:17:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:48:43.327 02:17:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:43.327 02:17:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:48:43.327 ************************************ 00:48:43.327 START TEST ftl_upgrade_shutdown 00:48:43.327 ************************************ 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 
0000:00:10.0 00:48:43.327 * Looking for test storage... 00:48:43.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:48:43.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:43.327 --rc genhtml_branch_coverage=1 00:48:43.327 --rc genhtml_function_coverage=1 00:48:43.327 --rc genhtml_legend=1 00:48:43.327 --rc geninfo_all_blocks=1 00:48:43.327 --rc geninfo_unexecuted_blocks=1 00:48:43.327 00:48:43.327 ' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:48:43.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:43.327 --rc genhtml_branch_coverage=1 00:48:43.327 --rc genhtml_function_coverage=1 00:48:43.327 --rc genhtml_legend=1 00:48:43.327 --rc geninfo_all_blocks=1 00:48:43.327 --rc geninfo_unexecuted_blocks=1 00:48:43.327 00:48:43.327 ' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:48:43.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:43.327 --rc genhtml_branch_coverage=1 00:48:43.327 --rc genhtml_function_coverage=1 00:48:43.327 --rc genhtml_legend=1 00:48:43.327 --rc geninfo_all_blocks=1 00:48:43.327 --rc geninfo_unexecuted_blocks=1 00:48:43.327 00:48:43.327 ' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:48:43.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:43.327 --rc genhtml_branch_coverage=1 00:48:43.327 --rc genhtml_function_coverage=1 00:48:43.327 --rc genhtml_legend=1 00:48:43.327 --rc geninfo_all_blocks=1 00:48:43.327 --rc geninfo_unexecuted_blocks=1 00:48:43.327 00:48:43.327 ' 00:48:43.327 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:48:43.586 02:17:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81593 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81593 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81593 ']' 00:48:43.586 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:43.587 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:43.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:43.587 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:43.587 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:43.587 02:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:43.587 [2024-10-15 02:17:52.494871] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:48:43.587 [2024-10-15 02:17:52.495052] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81593 ] 00:48:43.845 [2024-10-15 02:17:52.670003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:43.846 [2024-10-15 02:17:52.855552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:48:44.782 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:48:45.041 02:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:48:45.300 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:48:45.300 { 00:48:45.300 "name": "basen1", 00:48:45.300 "aliases": [ 00:48:45.300 "0cbee04a-8ef5-40fe-af45-5f476be90549" 00:48:45.300 ], 00:48:45.300 "product_name": "NVMe disk", 00:48:45.300 "block_size": 4096, 00:48:45.300 "num_blocks": 1310720, 00:48:45.300 "uuid": "0cbee04a-8ef5-40fe-af45-5f476be90549", 00:48:45.300 "numa_id": -1, 00:48:45.300 "assigned_rate_limits": { 00:48:45.300 "rw_ios_per_sec": 0, 00:48:45.300 "rw_mbytes_per_sec": 0, 00:48:45.300 "r_mbytes_per_sec": 0, 00:48:45.300 "w_mbytes_per_sec": 0 00:48:45.300 }, 00:48:45.300 "claimed": true, 00:48:45.300 "claim_type": "read_many_write_one", 00:48:45.300 "zoned": false, 00:48:45.300 "supported_io_types": { 00:48:45.300 "read": true, 00:48:45.300 "write": true, 00:48:45.300 "unmap": true, 00:48:45.300 "flush": true, 00:48:45.300 "reset": true, 00:48:45.300 "nvme_admin": true, 00:48:45.300 "nvme_io": true, 00:48:45.300 "nvme_io_md": false, 00:48:45.300 "write_zeroes": true, 00:48:45.300 "zcopy": false, 00:48:45.300 "get_zone_info": false, 00:48:45.300 "zone_management": false, 00:48:45.300 "zone_append": false, 00:48:45.300 "compare": true, 00:48:45.300 "compare_and_write": false, 00:48:45.300 "abort": true, 00:48:45.300 "seek_hole": false, 00:48:45.300 "seek_data": false, 00:48:45.300 "copy": true, 00:48:45.300 "nvme_iov_md": false 00:48:45.300 }, 00:48:45.300 "driver_specific": { 00:48:45.300 "nvme": [ 00:48:45.300 { 00:48:45.300 "pci_address": "0000:00:11.0", 00:48:45.300 "trid": { 00:48:45.300 "trtype": "PCIe", 00:48:45.300 "traddr": "0000:00:11.0" 00:48:45.300 }, 00:48:45.300 "ctrlr_data": { 00:48:45.300 "cntlid": 0, 00:48:45.300 "vendor_id": "0x1b36", 00:48:45.300 "model_number": "QEMU NVMe Ctrl", 00:48:45.300 "serial_number": "12341", 00:48:45.300 "firmware_revision": "8.0.0", 00:48:45.301 "subnqn": "nqn.2019-08.org.qemu:12341", 00:48:45.301 "oacs": { 00:48:45.301 "security": 0, 00:48:45.301 "format": 1, 00:48:45.301 "firmware": 0, 00:48:45.301 "ns_manage": 1 00:48:45.301 }, 00:48:45.301 "multi_ctrlr": false, 00:48:45.301 "ana_reporting": false 00:48:45.301 }, 00:48:45.301 "vs": { 00:48:45.301 "nvme_version": "1.4" 00:48:45.301 }, 00:48:45.301 "ns_data": { 00:48:45.301 "id": 1, 00:48:45.301 "can_share": false 00:48:45.301 } 00:48:45.301 } 00:48:45.301 ], 00:48:45.301 "mp_policy": "active_passive" 00:48:45.301 } 00:48:45.301 } 00:48:45.301 ]' 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:48:45.301 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:45.561 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=39e63fba-3b4e-4a96-96fe-1cad4fb6db04 00:48:45.561 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:48:45.561 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39e63fba-3b4e-4a96-96fe-1cad4fb6db04 00:48:45.839 [2024-10-15 02:17:54.818176] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair. 00:48:45.839 02:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:48:46.107 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c5eca432-877f-4799-ad97-ced9694a46f3 00:48:46.107 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c5eca432-877f-4799-ad97-ced9694a46f3 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6ff9a85f-d194-456f-ab78-2f42a9a93377 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6ff9a85f-d194-456f-ab78-2f42a9a93377 ]] 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6ff9a85f-d194-456f-ab78-2f42a9a93377 5120 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6ff9a85f-d194-456f-ab78-2f42a9a93377 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6ff9a85f-d194-456f-ab78-2f42a9a93377 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6ff9a85f-d194-456f-ab78-2f42a9a93377 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:48:46.390 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ff9a85f-d194-456f-ab78-2f42a9a93377 00:48:46.652 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:48:46.652 { 00:48:46.652 "name": "6ff9a85f-d194-456f-ab78-2f42a9a93377", 00:48:46.652 "aliases": [ 00:48:46.652 "lvs/basen1p0" 00:48:46.652 ], 00:48:46.652 "product_name": "Logical Volume", 00:48:46.652 "block_size": 4096, 00:48:46.652 "num_blocks": 5242880, 00:48:46.652 "uuid": "6ff9a85f-d194-456f-ab78-2f42a9a93377", 00:48:46.652 "assigned_rate_limits": { 00:48:46.652 "rw_ios_per_sec": 0, 00:48:46.652 "rw_mbytes_per_sec": 0, 00:48:46.652 "r_mbytes_per_sec": 0, 00:48:46.652 "w_mbytes_per_sec": 0 00:48:46.652 }, 00:48:46.652 "claimed": false, 00:48:46.652 "zoned": false, 00:48:46.652 "supported_io_types": { 00:48:46.652 "read": true, 00:48:46.652 "write": true, 00:48:46.652 "unmap": true, 
00:48:46.652 "flush": false, 00:48:46.652 "reset": true, 00:48:46.652 "nvme_admin": false, 00:48:46.652 "nvme_io": false, 00:48:46.652 "nvme_io_md": false, 00:48:46.652 "write_zeroes": true, 00:48:46.652 "zcopy": false, 00:48:46.652 "get_zone_info": false, 00:48:46.652 "zone_management": false, 00:48:46.652 "zone_append": false, 00:48:46.652 "compare": false, 00:48:46.652 "compare_and_write": false, 00:48:46.652 "abort": false, 00:48:46.652 "seek_hole": true, 00:48:46.652 "seek_data": true, 00:48:46.652 "copy": false, 00:48:46.652 "nvme_iov_md": false 00:48:46.652 }, 00:48:46.652 "driver_specific": { 00:48:46.652 "lvol": { 00:48:46.652 "lvol_store_uuid": "c5eca432-877f-4799-ad97-ced9694a46f3", 00:48:46.652 "base_bdev": "basen1", 00:48:46.652 "thin_provision": true, 00:48:46.652 "num_allocated_clusters": 0, 00:48:46.652 "snapshot": false, 00:48:46.652 "clone": false, 00:48:46.652 "esnap_clone": false 00:48:46.652 } 00:48:46.652 } 00:48:46.652 } 00:48:46.652 ]' 00:48:46.652 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:48:46.910 02:17:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:48:47.169 [2024-10-15 02:17:56.036785] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:48:47.169 02:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:48:47.169 02:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:48:47.169 02:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:48:47.428 [2024-10-15 02:17:56.263348] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 
00:48:47.428 02:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:48:47.428 02:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:48:47.428 02:17:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6ff9a85f-d194-456f-ab78-2f42a9a93377 -c cachen1p0 --l2p_dram_limit 2 00:48:47.687 [2024-10-15 02:17:56.469358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.687 [2024-10-15 02:17:56.469439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:48:47.687 [2024-10-15 02:17:56.469461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:48:47.687 [2024-10-15 02:17:56.469477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.687 [2024-10-15 02:17:56.469537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.687 [2024-10-15 02:17:56.469559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:48:47.687 [2024-10-15 02:17:56.469572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:48:47.687 [2024-10-15 02:17:56.469589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.687 [2024-10-15 02:17:56.469619] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:48:47.687 [2024-10-15 02:17:56.470519] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:48:47.687 [2024-10-15 02:17:56.470613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.687 [2024-10-15 02:17:56.470632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:48:47.687 [2024-10-15 02:17:56.470657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.000 ms 00:48:47.687 [2024-10-15 02:17:56.470673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.687 [2024-10-15 02:17:56.470816] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 42fc2f33-d538-40a1-b179-49da2798d81c 00:48:47.687 [2024-10-15 02:17:56.472681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.687 [2024-10-15 02:17:56.472734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:48:47.687 [2024-10-15 02:17:56.472769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:48:47.687 [2024-10-15 02:17:56.472781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.687 [2024-10-15 02:17:56.482265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.482306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:48:47.688 [2024-10-15 02:17:56.482325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.423 ms 00:48:47.688 [2024-10-15 02:17:56.482337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.688 [2024-10-15 02:17:56.482397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.482427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:48:47.688 [2024-10-15 02:17:56.482444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:48:47.688 [2024-10-15 02:17:56.482461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:48:47.688 [2024-10-15 02:17:56.482582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.482601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:48:47.688 [2024-10-15 02:17:56.482617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:48:47.688 [2024-10-15 02:17:56.482646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.688 [2024-10-15 02:17:56.482696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:48:47.688 [2024-10-15 02:17:56.487303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.487359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:48:47.688 [2024-10-15 02:17:56.487375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.619 ms 00:48:47.688 [2024-10-15 02:17:56.487392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.688 [2024-10-15 02:17:56.487435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.487456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:48:47.688 [2024-10-15 02:17:56.487472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:47.688 [2024-10-15 02:17:56.487488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.688 [2024-10-15 02:17:56.487532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:48:47.688 [2024-10-15 02:17:56.487717] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:48:47.688 [2024-10-15 02:17:56.487738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:48:47.688 [2024-10-15 02:17:56.487760] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:48:47.688 [2024-10-15 02:17:56.487775] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:48:47.688 [2024-10-15 02:17:56.487791] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:48:47.688 [2024-10-15 02:17:56.487805] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:48:47.688 [2024-10-15 02:17:56.487820] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:48:47.688 [2024-10-15 02:17:56.487832] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:48:47.688 [2024-10-15 02:17:56.487846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:48:47.688 [2024-10-15 02:17:56.487859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.487872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:48:47.688 [2024-10-15 02:17:56.487898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:48:47.688 [2024-10-15 02:17:56.487914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.688 [2024-10-15 02:17:56.488006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.688 [2024-10-15 02:17:56.488027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:48:47.688 [2024-10-15 02:17:56.488040] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:48:47.688 [2024-10-15 02:17:56.488055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.688 [2024-10-15 02:17:56.488153] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:48:47.688 [2024-10-15 02:17:56.488183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:48:47.688 [2024-10-15 02:17:56.488199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:48:47.688 [2024-10-15 02:17:56.488240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:48:47.688 [2024-10-15 02:17:56.488267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:48:47.688 [2024-10-15 02:17:56.488278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:48:47.688 [2024-10-15 02:17:56.488292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:48:47.688 [2024-10-15 02:17:56.488316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:48:47.688 [2024-10-15 02:17:56.488327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:48:47.688 [2024-10-15 02:17:56.488356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:48:47.688 [2024-10-15 02:17:56.488369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:48:47.688 [2024-10-15 02:17:56.488394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:48:47.688 [2024-10-15 02:17:56.488442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:48:47.688 [2024-10-15 02:17:56.488474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:48:47.688 [2024-10-15 02:17:56.488490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:48:47.688 [2024-10-15 02:17:56.488515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:48:47.688 [2024-10-15 02:17:56.488526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:48:47.688 [2024-10-15 02:17:56.488552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:48:47.688 [2024-10-15 02:17:56.488566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:48:47.688 [2024-10-15 02:17:56.488594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:48:47.688 [2024-10-15 02:17:56.488605] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:48:47.688 [2024-10-15 02:17:56.488630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:48:47.688 [2024-10-15 02:17:56.488644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:48:47.688 [2024-10-15 02:17:56.488677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:48:47.688 [2024-10-15 02:17:56.488714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:48:47.688 [2024-10-15 02:17:56.488769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:48:47.688 [2024-10-15 02:17:56.488780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488794] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:48:47.688 [2024-10-15 02:17:56.488808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:48:47.688 [2024-10-15 02:17:56.488826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:47.688 [2024-10-15 02:17:56.488853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:48:47.688 [2024-10-15 02:17:56.488865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:48:47.688 [2024-10-15 02:17:56.488878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:48:47.688 [2024-10-15 02:17:56.488892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:48:47.688 [2024-10-15 02:17:56.488906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:48:47.688 [2024-10-15 02:17:56.488918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:48:47.688 [2024-10-15 02:17:56.488936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:48:47.688 [2024-10-15 02:17:56.488950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.488966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:48:47.688 [2024-10-15 02:17:56.488977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.488991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:48:47.688 [2024-10-15 02:17:56.489018] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:48:47.688 [2024-10-15 02:17:56.489030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:48:47.688 [2024-10-15 02:17:56.489046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:48:47.688 [2024-10-15 02:17:56.489057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:48:47.688 [2024-10-15 02:17:56.489133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:48:47.688 [2024-10-15 02:17:56.489147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:48:47.688 [2024-10-15 02:17:56.489161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:47.689 [2024-10-15 02:17:56.489178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:47.689 [2024-10-15 02:17:56.489189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:48:47.689 [2024-10-15 02:17:56.489203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:48:47.689 [2024-10-15 02:17:56.489215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:48:47.689 [2024-10-15 02:17:56.489230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:47.689 [2024-10-15 02:17:56.489242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:48:47.689 [2024-10-15 02:17:56.489259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:48:47.689 [2024-10-15 02:17:56.489270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:47.689 [2024-10-15 02:17:56.489330] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
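Note on the layout dump above: the superblock records every region as blk_offs/blk_sz in FTL blocks, and the MiB figures in the human-readable dump line up if one assumes the SPDK FTL default block size of 4 KiB. That block size is an inference from the numbers matching, not something this log states explicitly. A quick shell check using values taken straight from the dump:

    echo $(( 0xe80 * 4096 ))   # l2p region: 3712 blocks * 4 KiB = 15204352 B = 14.50 MiB, as dumped above
    echo $(( 3774873 * 4 ))    # 3774873 L2P entries * 4 B address size = 15099492 B, which fits in that region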
00:48:47.689 [2024-10-15 02:17:56.489347] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:48:51.879 [2024-10-15 02:18:00.658496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.658589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:48:51.879 [2024-10-15 02:18:00.658614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4169.179 ms 00:48:51.879 [2024-10-15 02:18:00.658628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.692323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.692378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:48:51.879 [2024-10-15 02:18:00.692424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.390 ms 00:48:51.879 [2024-10-15 02:18:00.692444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.692553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.692572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:48:51.879 [2024-10-15 02:18:00.692588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:48:51.879 [2024-10-15 02:18:00.692602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.738479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.738560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:48:51.879 [2024-10-15 02:18:00.738584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.787 ms 00:48:51.879 [2024-10-15 02:18:00.738598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.738649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.738665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:48:51.879 [2024-10-15 02:18:00.738681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:48:51.879 [2024-10-15 02:18:00.738692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.739335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.739364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:48:51.879 [2024-10-15 02:18:00.739385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:48:51.879 [2024-10-15 02:18:00.739397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.739470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.739501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:48:51.879 [2024-10-15 02:18:00.739516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:48:51.879 [2024-10-15 02:18:00.739527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.756934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.756971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:48:51.879 [2024-10-15 02:18:00.756990] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.381 ms 00:48:51.879 [2024-10-15 02:18:00.757002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.769156] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:48:51.879 [2024-10-15 02:18:00.770508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.770564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:48:51.879 [2024-10-15 02:18:00.770583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.415 ms 00:48:51.879 [2024-10-15 02:18:00.770598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.805510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.805558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:48:51.879 [2024-10-15 02:18:00.805578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.880 ms 00:48:51.879 [2024-10-15 02:18:00.805597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.805697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.805720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:48:51.879 [2024-10-15 02:18:00.805734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:48:51.879 [2024-10-15 02:18:00.805747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.879 [2024-10-15 02:18:00.830628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.879 [2024-10-15 02:18:00.830675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:48:51.880 [2024-10-15 02:18:00.830692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.821 ms 00:48:51.880 [2024-10-15 02:18:00.830707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.880 [2024-10-15 02:18:00.855544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.880 [2024-10-15 02:18:00.855589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:48:51.880 [2024-10-15 02:18:00.855605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.792 ms 00:48:51.880 [2024-10-15 02:18:00.855619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:51.880 [2024-10-15 02:18:00.856267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:51.880 [2024-10-15 02:18:00.856301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:48:51.880 [2024-10-15 02:18:00.856315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.605 ms 00:48:51.880 [2024-10-15 02:18:00.856333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:00.955851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:52.139 [2024-10-15 02:18:00.955902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:48:52.139 [2024-10-15 02:18:00.955923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.474 ms 00:48:52.139 [2024-10-15 02:18:00.955938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:00.982733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:48:52.139 [2024-10-15 02:18:00.982779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:48:52.139 [2024-10-15 02:18:00.982797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.707 ms 00:48:52.139 [2024-10-15 02:18:00.982825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:01.007945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:52.139 [2024-10-15 02:18:01.007990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:48:52.139 [2024-10-15 02:18:01.008006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.073 ms 00:48:52.139 [2024-10-15 02:18:01.008020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:01.033213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:52.139 [2024-10-15 02:18:01.033268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:48:52.139 [2024-10-15 02:18:01.033285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.150 ms 00:48:52.139 [2024-10-15 02:18:01.033303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:01.033351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:52.139 [2024-10-15 02:18:01.033374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:48:52.139 [2024-10-15 02:18:01.033390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:48:52.139 [2024-10-15 02:18:01.033415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:01.033516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:52.139 [2024-10-15 02:18:01.033536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:48:52.139 [2024-10-15 02:18:01.033549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:48:52.139 [2024-10-15 02:18:01.033580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:52.139 [2024-10-15 02:18:01.035012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4565.012 ms, result 0 00:48:52.139 { 00:48:52.139 "name": "ftl", 00:48:52.139 "uuid": "42fc2f33-d538-40a1-b179-49da2798d81c" 00:48:52.139 } 00:48:52.139 02:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:48:52.398 [2024-10-15 02:18:01.333902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:52.398 02:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:48:52.656 02:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:48:52.914 [2024-10-15 02:18:01.810434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:48:52.914 02:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:48:53.172 [2024-10-15 02:18:02.032184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:53.172 02:18:02 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:48:53.740 Fill FTL, iteration 1 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81729 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81729 /var/tmp/spdk.tgt.sock 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81729 ']' 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:53.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:53.740 02:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:53.740 [2024-10-15 02:18:02.550786] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
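Note: tcp_initiator_setup above launches a second, single-core SPDK application that acts as the NVMe/TCP initiator; the "Starting SPDK" banner just printed (and the EAL parameter line that follows) belong to that process. Condensed into a minimal sketch, together with the attach step that comes next in this log, both commands taken verbatim from the trace:

    # initiator app pinned to core 1, with its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    # connect to the exported subsystem; this surfaces the FTL namespace as bdev ftln1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

The backgrounding and ordering here are a simplification; the real flow in ftl/common.sh waits for the RPC socket (waitforlisten) before issuing the attach.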
00:48:53.740 [2024-10-15 02:18:02.550996] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81729 ] 00:48:53.740 [2024-10-15 02:18:02.715700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:54.000 [2024-10-15 02:18:02.974320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:48:54.937 02:18:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:54.937 02:18:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:48:54.937 02:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:48:55.195 [2024-10-15 02:18:04.049040] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030280 was disconnected and freed. delete nvme_qpair. 00:48:55.196 ftln1 00:48:55.196 02:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:48:55.196 02:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81729 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81729 ']' 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81729 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81729 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:48:55.455 killing process with pid 81729 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81729' 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81729 00:48:55.455 02:18:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81729 00:48:57.990 02:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:48:57.990 02:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:48:57.990 [2024-10-15 02:18:06.498632] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
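Note: the spdk_dd invocation just traced defines the whole fill geometry. Each pass writes bs * count bytes at queue depth 2 into ftln1, and --seek is in bs-sized units, so seek=1024 starts the second pass exactly 1 GiB in. Checking against the variables set at the top of upgrade_shutdown.sh:

    echo $(( 1048576 * 1024 ))       # bs * count = 1073741824 B = 1 GiB, equal to size= above
    echo $(( 2 * 1048576 * 1024 ))   # iterations=2 -> 2 GiB written in total, at seek=0 and seek=1024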
00:48:57.990 [2024-10-15 02:18:06.498803] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81782 ] 00:48:57.990 [2024-10-15 02:18:06.666469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:57.990 [2024-10-15 02:18:06.877650] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:48:58.558 [2024-10-15 02:18:07.280166] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:48:59.500  [2024-10-15T02:18:09.449Z] Copying: 257/1024 [MB] (257 MBps) [2024-10-15T02:18:10.385Z] Copying: 507/1024 [MB] (250 MBps) [2024-10-15T02:18:11.322Z] Copying: 762/1024 [MB] (255 MBps) [2024-10-15T02:18:11.582Z] Copying: 1015/1024 [MB] (253 MBps) [2024-10-15T02:18:11.582Z] Copying: 1024/1024 [MB] (average 253 MBps)[2024-10-15 02:18:11.343810] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 00:49:03.507 00:49:03.507 00:49:03.507 Calculate MD5 checksum, iteration 1 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:03.507 02:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:03.766 [2024-10-15 02:18:12.566260] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:03.766 [2024-10-15 02:18:12.566477] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81848 ] 00:49:03.766 [2024-10-15 02:18:12.741375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:04.024 [2024-10-15 02:18:12.945337] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:04.592 [2024-10-15 02:18:13.353184] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 
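Note: the pass above reads the just-written range back out of ftln1 into the plain file /home/vagrant/spdk_repo/spdk/test/ftl/file, which the loop then hashes; given the test name, the stored checksum is presumably compared again after the shutdown/upgrade cycle. The capture, condensing the two commands traced further down (md5sum, then cut):

    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')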
00:49:05.530  [2024-10-15T02:18:15.479Z] Copying: 452/1024 [MB] (452 MBps) [2024-10-15T02:18:15.739Z] Copying: 884/1024 [MB] (432 MBps) [2024-10-15T02:18:15.739Z] Copying: 1024/1024 [MB] (average 440 MBps)[2024-10-15 02:18:15.698098] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 00:49:07.662 00:49:07.662 00:49:07.662 02:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:49:07.662 02:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b655bb071198f4fbccc7dab2b1fec1ba 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:49:09.619 Fill FTL, iteration 2 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:09.619 02:18:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:49:09.619 [2024-10-15 02:18:18.623776] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:09.619 [2024-10-15 02:18:18.623932] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81904 ] 00:49:09.877 [2024-10-15 02:18:18.784971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:10.135 [2024-10-15 02:18:18.992319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:10.393 [2024-10-15 02:18:19.400366] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:49:11.770  [2024-10-15T02:18:21.720Z] Copying: 252/1024 [MB] (252 MBps) [2024-10-15T02:18:22.655Z] Copying: 504/1024 [MB] (252 MBps) [2024-10-15T02:18:23.592Z] Copying: 759/1024 [MB] (255 MBps) [2024-10-15T02:18:23.592Z] Copying: 1014/1024 [MB] (255 MBps) [2024-10-15T02:18:23.592Z] Copying: 1024/1024 [MB] (average 253 MBps)[2024-10-15 02:18:23.468319] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 
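Note: with iteration 2's fill finished above, the traced steps fit the following loop shape. Variable names and values are taken from the xtrace output, but this body is a hedged reconstruction for readability, not the literal text of test/ftl/upgrade_shutdown.sh:

    seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file   # path as it appears in the trace
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum $testfile | cut -f1 -d' ')
    done

Each tcp_dd call spawns a fresh spdk_dd against the initiator's RPC socket, which is why every copy in this log runs under a new pid (81782, 81848, 81904, 81968).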
00:49:15.958 00:49:15.958 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:49:15.958 Calculate MD5 checksum, iteration 2 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:15.958 02:18:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:15.958 [2024-10-15 02:18:24.687144] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:15.958 [2024-10-15 02:18:24.687323] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81968 ] 00:49:15.958 [2024-10-15 02:18:24.858783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:16.217 [2024-10-15 02:18:25.061312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:16.476 [2024-10-15 02:18:25.471107] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:49:17.853  [2024-10-15T02:18:27.801Z] Copying: 453/1024 [MB] (453 MBps) [2024-10-15T02:18:28.060Z] Copying: 892/1024 [MB] (439 MBps) [2024-10-15T02:18:28.317Z] Copying: 1024/1024 [MB] (average 443 MBps)[2024-10-15 02:18:28.301844] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 
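Note: iteration 2's read-back just completed above; what follows is its checksum plus the property phase of the test. A condensed sketch of the RPC sequence below, with the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shortened to rpc.py; the exact invocations, including the jq filter, appear verbatim in the trace that follows:

    rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    # count cache chunks holding data; the log below reports used=3
    # (chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN at 0.001953125)
    rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

Arming prep_upgrade_on_shutdown is the crux of this test: the later killprocess of the target (pid 81593) is followed by the long "Persist ..." sequence visible at the end of this log rather than a plain teardown.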
00:49:20.241 00:49:20.241 00:49:20.241 02:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:49:20.241 02:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:22.145 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:49:22.145 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a38ec1daf667dd9da657274657aa74e9 00:49:22.145 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:49:22.145 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:49:22.145 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:49:22.404 [2024-10-15 02:18:31.327343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:22.404 [2024-10-15 02:18:31.327444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:22.404 [2024-10-15 02:18:31.327475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:49:22.404 [2024-10-15 02:18:31.327487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:22.404 [2024-10-15 02:18:31.327531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:22.404 [2024-10-15 02:18:31.327548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:22.404 [2024-10-15 02:18:31.327560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:49:22.404 [2024-10-15 02:18:31.327572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:22.404 [2024-10-15 02:18:31.327597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:22.404 [2024-10-15 02:18:31.327627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:22.404 [2024-10-15 02:18:31.327639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:22.404 [2024-10-15 02:18:31.327656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:22.404 [2024-10-15 02:18:31.327731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.373 ms, result 0 00:49:22.404 true 00:49:22.404 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:22.662 { 00:49:22.662 "name": "ftl", 00:49:22.662 "properties": [ 00:49:22.662 { 00:49:22.662 "name": "superblock_version", 00:49:22.662 "value": 5, 00:49:22.662 "read-only": true 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "name": "base_device", 00:49:22.662 "bands": [ 00:49:22.662 { 00:49:22.662 "id": 0, 00:49:22.662 "state": "FREE", 00:49:22.662 "validity": 0.0 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "id": 1, 00:49:22.662 "state": "FREE", 00:49:22.662 "validity": 0.0 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "id": 2, 00:49:22.662 "state": "FREE", 00:49:22.662 "validity": 0.0 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "id": 3, 00:49:22.662 "state": "FREE", 00:49:22.662 "validity": 0.0 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "id": 4, 00:49:22.662 "state": "FREE", 00:49:22.662 "validity": 0.0 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "id": 5, 00:49:22.662 "state": "FREE", 00:49:22.662 "validity": 0.0 00:49:22.662 }, 00:49:22.662 { 00:49:22.662 "id": 6, 00:49:22.662 "state": "FREE", 
00:49:22.662 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 7, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 8, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 9, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 10, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 11, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 12, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 13, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 14, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 15, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 16, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 17, 00:49:22.663 "state": "FREE", 00:49:22.663 "validity": 0.0 00:49:22.663 } 00:49:22.663 ], 00:49:22.663 "read-only": true 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "name": "cache_device", 00:49:22.663 "type": "bdev", 00:49:22.663 "chunks": [ 00:49:22.663 { 00:49:22.663 "id": 0, 00:49:22.663 "state": "INACTIVE", 00:49:22.663 "utilization": 0.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 1, 00:49:22.663 "state": "CLOSED", 00:49:22.663 "utilization": 1.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 2, 00:49:22.663 "state": "CLOSED", 00:49:22.663 "utilization": 1.0 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 3, 00:49:22.663 "state": "OPEN", 00:49:22.663 "utilization": 0.001953125 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "id": 4, 00:49:22.663 "state": "OPEN", 00:49:22.663 "utilization": 0.0 00:49:22.663 } 00:49:22.663 ], 00:49:22.663 "read-only": true 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "name": "verbose_mode", 00:49:22.663 "value": true, 00:49:22.663 "unit": "", 00:49:22.663 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:49:22.663 }, 00:49:22.663 { 00:49:22.663 "name": "prep_upgrade_on_shutdown", 00:49:22.663 "value": false, 00:49:22.663 "unit": "", 00:49:22.663 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:49:22.663 } 00:49:22.663 ] 00:49:22.663 } 00:49:22.663 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:49:22.922 [2024-10-15 02:18:31.767732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:22.922 [2024-10-15 02:18:31.767781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:22.922 [2024-10-15 02:18:31.767816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:49:22.922 [2024-10-15 02:18:31.767828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:22.922 [2024-10-15 02:18:31.767861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:22.922 [2024-10-15 02:18:31.767892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:22.922 [2024-10-15 02:18:31.767904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.003 ms 00:49:22.922 [2024-10-15 02:18:31.767915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:22.922 [2024-10-15 02:18:31.767940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:22.922 [2024-10-15 02:18:31.767954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:22.922 [2024-10-15 02:18:31.767965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:49:22.922 [2024-10-15 02:18:31.767976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:22.922 [2024-10-15 02:18:31.768043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.297 ms, result 0 00:49:22.922 true 00:49:22.922 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:49:22.922 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:49:22.922 02:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:23.181 02:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:49:23.181 02:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:49:23.181 02:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:49:23.440 [2024-10-15 02:18:32.212135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:23.440 [2024-10-15 02:18:32.212177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:23.440 [2024-10-15 02:18:32.212210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:23.440 [2024-10-15 02:18:32.212221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:23.440 [2024-10-15 02:18:32.212251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:23.440 [2024-10-15 02:18:32.212266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:23.440 [2024-10-15 02:18:32.212277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:23.440 [2024-10-15 02:18:32.212287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:23.440 [2024-10-15 02:18:32.212311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:23.440 [2024-10-15 02:18:32.212324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:23.440 [2024-10-15 02:18:32.212335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:49:23.440 [2024-10-15 02:18:32.212345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:23.440 [2024-10-15 02:18:32.212436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:49:23.440 true 00:49:23.440 02:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:23.699 { 00:49:23.699 "name": "ftl", 00:49:23.699 "properties": [ 00:49:23.699 { 00:49:23.699 "name": "superblock_version", 00:49:23.699 "value": 5, 00:49:23.699 "read-only": true 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "name": "base_device", 00:49:23.699 "bands": [ 
00:49:23.699 { 00:49:23.699 "id": 0, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 1, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 2, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 3, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 4, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 5, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 6, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 7, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 8, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 9, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 10, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 11, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 12, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 13, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 14, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 15, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 16, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "id": 17, 00:49:23.699 "state": "FREE", 00:49:23.699 "validity": 0.0 00:49:23.699 } 00:49:23.699 ], 00:49:23.699 "read-only": true 00:49:23.699 }, 00:49:23.699 { 00:49:23.699 "name": "cache_device", 00:49:23.700 "type": "bdev", 00:49:23.700 "chunks": [ 00:49:23.700 { 00:49:23.700 "id": 0, 00:49:23.700 "state": "INACTIVE", 00:49:23.700 "utilization": 0.0 00:49:23.700 }, 00:49:23.700 { 00:49:23.700 "id": 1, 00:49:23.700 "state": "CLOSED", 00:49:23.700 "utilization": 1.0 00:49:23.700 }, 00:49:23.700 { 00:49:23.700 "id": 2, 00:49:23.700 "state": "CLOSED", 00:49:23.700 "utilization": 1.0 00:49:23.700 }, 00:49:23.700 { 00:49:23.700 "id": 3, 00:49:23.700 "state": "OPEN", 00:49:23.700 "utilization": 0.001953125 00:49:23.700 }, 00:49:23.700 { 00:49:23.700 "id": 4, 00:49:23.700 "state": "OPEN", 00:49:23.700 "utilization": 0.0 00:49:23.700 } 00:49:23.700 ], 00:49:23.700 "read-only": true 00:49:23.700 }, 00:49:23.700 { 00:49:23.700 "name": "verbose_mode", 00:49:23.700 "value": true, 00:49:23.700 "unit": "", 00:49:23.700 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:49:23.700 }, 00:49:23.700 { 00:49:23.700 "name": "prep_upgrade_on_shutdown", 00:49:23.700 "value": true, 00:49:23.700 "unit": "", 00:49:23.700 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:49:23.700 } 00:49:23.700 ] 00:49:23.700 } 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81593 ]] 00:49:23.700 
02:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81593 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81593 ']' 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81593 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81593 00:49:23.700 killing process with pid 81593 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81593' 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81593 00:49:23.700 02:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81593 00:49:24.637 [2024-10-15 02:18:33.367421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:49:24.637 [2024-10-15 02:18:33.382885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:24.637 [2024-10-15 02:18:33.382930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:49:24.637 [2024-10-15 02:18:33.382965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:24.637 [2024-10-15 02:18:33.382977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:24.637 [2024-10-15 02:18:33.383012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:49:24.637 [2024-10-15 02:18:33.386092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:24.637 [2024-10-15 02:18:33.386123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:49:24.637 [2024-10-15 02:18:33.386153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.059 ms 00:49:24.637 [2024-10-15 02:18:33.386165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.464500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.464582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:49:32.756 [2024-10-15 02:18:41.464603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8078.353 ms 00:49:32.756 [2024-10-15 02:18:41.464615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.465768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.465820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:49:32.756 [2024-10-15 02:18:41.465836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.120 ms 00:49:32.756 [2024-10-15 02:18:41.465849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.467058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.467100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:49:32.756 [2024-10-15 02:18:41.467116] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:49:32.756 [2024-10-15 02:18:41.467128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.477903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.477940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:49:32.756 [2024-10-15 02:18:41.477972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.737 ms 00:49:32.756 [2024-10-15 02:18:41.477983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.484936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.484975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:49:32.756 [2024-10-15 02:18:41.485013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.915 ms 00:49:32.756 [2024-10-15 02:18:41.485025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.485115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.485134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:49:32.756 [2024-10-15 02:18:41.485147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:49:32.756 [2024-10-15 02:18:41.485159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.495266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.495300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:49:32.756 [2024-10-15 02:18:41.495330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.088 ms 00:49:32.756 [2024-10-15 02:18:41.495342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.505315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.505364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:49:32.756 [2024-10-15 02:18:41.505394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.936 ms 00:49:32.756 [2024-10-15 02:18:41.505405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.515329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.515363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:49:32.756 [2024-10-15 02:18:41.515392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.879 ms 00:49:32.756 [2024-10-15 02:18:41.515402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.525140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.756 [2024-10-15 02:18:41.525191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:49:32.756 [2024-10-15 02:18:41.525221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.662 ms 00:49:32.756 [2024-10-15 02:18:41.525232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.756 [2024-10-15 02:18:41.525277] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:49:32.756 [2024-10-15 02:18:41.525298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 
wr_cnt: 1 state: closed 00:49:32.756 [2024-10-15 02:18:41.525316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:49:32.756 [2024-10-15 02:18:41.525330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:49:32.756 [2024-10-15 02:18:41.525360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:32.756 [2024-10-15 02:18:41.525565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:49:32.756 [2024-10-15 02:18:41.525576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 42fc2f33-d538-40a1-b179-49da2798d81c 00:49:32.756 [2024-10-15 02:18:41.525593] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:49:32.756 [2024-10-15 02:18:41.525604] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:49:32.756 [2024-10-15 02:18:41.525614] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:49:32.756 [2024-10-15 02:18:41.525630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:49:32.756 [2024-10-15 02:18:41.525643] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:49:32.756 [2024-10-15 02:18:41.525654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:49:32.756 [2024-10-15 02:18:41.525665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:49:32.757 [2024-10-15 02:18:41.525675] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:49:32.757 [2024-10-15 02:18:41.525686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:49:32.757 [2024-10-15 02:18:41.525698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.757 [2024-10-15 02:18:41.525709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:49:32.757 [2024-10-15 02:18:41.525721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:49:32.757 [2024-10-15 02:18:41.525732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.539739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.757 [2024-10-15 02:18:41.539776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:49:32.757 [2024-10-15 02:18:41.539807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.983 ms 00:49:32.757 [2024-10-15 02:18:41.539820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.540265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:32.757 [2024-10-15 02:18:41.540288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:49:32.757 [2024-10-15 02:18:41.540302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:49:32.757 [2024-10-15 02:18:41.540321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.580976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.581022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:49:32.757 [2024-10-15 02:18:41.581053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.581064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.581099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.581113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:49:32.757 [2024-10-15 02:18:41.581125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.581136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.581224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.581243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:49:32.757 [2024-10-15 02:18:41.581271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.581297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.581321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.581334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:49:32.757 [2024-10-15 02:18:41.581346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.581356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.665521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.665600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:49:32.757 [2024-10-15 
02:18:41.665633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.665645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.734262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.734319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:49:32.757 [2024-10-15 02:18:41.734352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.734363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.734514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.734543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:49:32.757 [2024-10-15 02:18:41.734574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.734586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.734663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.734681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:49:32.757 [2024-10-15 02:18:41.734694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.734706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.734824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.734849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:49:32.757 [2024-10-15 02:18:41.734863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.734874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.734928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.734953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:49:32.757 [2024-10-15 02:18:41.734967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.734979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.735041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.735063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:49:32.757 [2024-10-15 02:18:41.735090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.735101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.735179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:32.757 [2024-10-15 02:18:41.735201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:49:32.757 [2024-10-15 02:18:41.735214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:32.757 [2024-10-15 02:18:41.735226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:32.757 [2024-10-15 02:18:41.735383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8352.511 ms, result 0 00:49:32.757 [2024-10-15 02:18:41.736449] 
bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:49:32.757 [2024-10-15 02:18:41.739508] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015920 was disconnected and freed. delete nvme_qpair. 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82173 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82173 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82173 ']' 00:49:36.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:36.044 02:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:49:36.044 [2024-10-15 02:18:44.953557] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:36.044 [2024-10-15 02:18:44.953712] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82173 ] 00:49:36.303 [2024-10-15 02:18:45.111248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:36.303 [2024-10-15 02:18:45.293911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:49:37.257 [2024-10-15 02:18:46.106185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:49:37.258 [2024-10-15 02:18:46.106275] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:49:37.258 [2024-10-15 02:18:46.239713] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 
00:49:37.258 [2024-10-15 02:18:46.252301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.258 [2024-10-15 02:18:46.252355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:49:37.258 [2024-10-15 02:18:46.252389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:49:37.258 [2024-10-15 02:18:46.252400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.258 [2024-10-15 02:18:46.252489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.258 [2024-10-15 02:18:46.252509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:49:37.258 [2024-10-15 02:18:46.252522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:49:37.258 [2024-10-15 02:18:46.252532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.258 [2024-10-15 02:18:46.252578] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:49:37.258 [2024-10-15 02:18:46.253564] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:49:37.258 [2024-10-15 02:18:46.253599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.258 [2024-10-15 02:18:46.253613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:49:37.258 [2024-10-15 02:18:46.253630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.036 ms 00:49:37.258 [2024-10-15 02:18:46.253640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.258 [2024-10-15 02:18:46.255683] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:49:37.530 [2024-10-15 02:18:46.271589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.530 [2024-10-15 02:18:46.271653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:49:37.530 [2024-10-15 02:18:46.271686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.908 ms 00:49:37.530 [2024-10-15 02:18:46.271698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.530 [2024-10-15 02:18:46.271769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.530 [2024-10-15 02:18:46.271789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:49:37.530 [2024-10-15 02:18:46.271801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:49:37.530 [2024-10-15 02:18:46.271829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.281187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.281230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:49:37.531 [2024-10-15 02:18:46.281261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.202 ms 00:49:37.531 [2024-10-15 02:18:46.281288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.281389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.281410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:49:37.531 [2024-10-15 02:18:46.281427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:49:37.531 [2024-10-15 02:18:46.281438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 
[2024-10-15 02:18:46.281533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.281552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:49:37.531 [2024-10-15 02:18:46.281565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:49:37.531 [2024-10-15 02:18:46.281577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.281616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:49:37.531 [2024-10-15 02:18:46.286215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.286263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:49:37.531 [2024-10-15 02:18:46.286292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.608 ms 00:49:37.531 [2024-10-15 02:18:46.286303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.286335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.286355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:49:37.531 [2024-10-15 02:18:46.286367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:37.531 [2024-10-15 02:18:46.286377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.286468] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:49:37.531 [2024-10-15 02:18:46.286502] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:49:37.531 [2024-10-15 02:18:46.286587] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:49:37.531 [2024-10-15 02:18:46.286611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:49:37.531 [2024-10-15 02:18:46.286731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:49:37.531 [2024-10-15 02:18:46.286747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:49:37.531 [2024-10-15 02:18:46.286763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:49:37.531 [2024-10-15 02:18:46.286778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:49:37.531 [2024-10-15 02:18:46.286791] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:49:37.531 [2024-10-15 02:18:46.286804] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:49:37.531 [2024-10-15 02:18:46.286816] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:49:37.531 [2024-10-15 02:18:46.286827] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:49:37.531 [2024-10-15 02:18:46.286838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:49:37.531 [2024-10-15 02:18:46.286850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.286862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:49:37.531 [2024-10-15 02:18:46.286908] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:49:37.531 [2024-10-15 02:18:46.286919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.287029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.531 [2024-10-15 02:18:46.287044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:49:37.531 [2024-10-15 02:18:46.287055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:49:37.531 [2024-10-15 02:18:46.287065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.531 [2024-10-15 02:18:46.287173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:49:37.531 [2024-10-15 02:18:46.287200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:49:37.531 [2024-10-15 02:18:46.287212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:49:37.531 [2024-10-15 02:18:46.287250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:49:37.531 [2024-10-15 02:18:46.287272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:49:37.531 [2024-10-15 02:18:46.287282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:49:37.531 [2024-10-15 02:18:46.287292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:49:37.531 [2024-10-15 02:18:46.287312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:49:37.531 [2024-10-15 02:18:46.287322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:49:37.531 [2024-10-15 02:18:46.287343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:49:37.531 [2024-10-15 02:18:46.287353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:49:37.531 [2024-10-15 02:18:46.287372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:49:37.531 [2024-10-15 02:18:46.287382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:49:37.531 [2024-10-15 02:18:46.287418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:49:37.531 [2024-10-15 02:18:46.287432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:49:37.531 [2024-10-15 02:18:46.287466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:49:37.531 [2024-10-15 02:18:46.287476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:49:37.531 [2024-10-15 02:18:46.287497] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:49:37.531 [2024-10-15 02:18:46.287507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:49:37.531 [2024-10-15 02:18:46.287526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:49:37.531 [2024-10-15 02:18:46.287536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:49:37.531 [2024-10-15 02:18:46.287555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:49:37.531 [2024-10-15 02:18:46.287565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:49:37.531 [2024-10-15 02:18:46.287584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:49:37.531 [2024-10-15 02:18:46.287614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:49:37.531 [2024-10-15 02:18:46.287645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:49:37.531 [2024-10-15 02:18:46.287654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287664] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:49:37.531 [2024-10-15 02:18:46.287675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:49:37.531 [2024-10-15 02:18:46.287685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:37.531 [2024-10-15 02:18:46.287707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:49:37.531 [2024-10-15 02:18:46.287717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:49:37.531 [2024-10-15 02:18:46.287727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:49:37.531 [2024-10-15 02:18:46.287737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:49:37.531 [2024-10-15 02:18:46.287746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:49:37.531 [2024-10-15 02:18:46.287757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:49:37.531 [2024-10-15 02:18:46.287768] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:49:37.531 [2024-10-15 02:18:46.287782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:37.531 [2024-10-15 02:18:46.287795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:49:37.531 [2024-10-15 02:18:46.287807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:49:37.531 [2024-10-15 02:18:46.287817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:49:37.532 [2024-10-15 02:18:46.287838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:49:37.532 [2024-10-15 02:18:46.287849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:49:37.532 [2024-10-15 02:18:46.287859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:49:37.532 [2024-10-15 02:18:46.287869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:49:37.532 [2024-10-15 02:18:46.287939] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:49:37.532 [2024-10-15 02:18:46.287958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:37.532 [2024-10-15 02:18:46.287980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:49:37.532 [2024-10-15 02:18:46.287991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:49:37.532 [2024-10-15 02:18:46.288002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:49:37.532 [2024-10-15 02:18:46.288014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.532 [2024-10-15 02:18:46.288024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:49:37.532 [2024-10-15 02:18:46.288041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.903 ms 00:49:37.532 [2024-10-15 
02:18:46.288051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.532 [2024-10-15 02:18:46.288112] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:49:37.532 [2024-10-15 02:18:46.288129] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:49:40.818 [2024-10-15 02:18:49.302925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.302997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:49:40.818 [2024-10-15 02:18:49.303046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3014.830 ms 00:49:40.818 [2024-10-15 02:18:49.303058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.336205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.336256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:49:40.818 [2024-10-15 02:18:49.336290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.821 ms 00:49:40.818 [2024-10-15 02:18:49.336300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.336433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.336453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:49:40.818 [2024-10-15 02:18:49.336466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:49:40.818 [2024-10-15 02:18:49.336476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.393239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.393284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:49:40.818 [2024-10-15 02:18:49.393316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.704 ms 00:49:40.818 [2024-10-15 02:18:49.393327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.393383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.393399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:49:40.818 [2024-10-15 02:18:49.393411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:40.818 [2024-10-15 02:18:49.393436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.394112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.394155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:49:40.818 [2024-10-15 02:18:49.394183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:49:40.818 [2024-10-15 02:18:49.394194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.394254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.394269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:49:40.818 [2024-10-15 02:18:49.394280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:49:40.818 [2024-10-15 02:18:49.394291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 
02:18:49.411932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.411986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:49:40.818 [2024-10-15 02:18:49.412002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.613 ms 00:49:40.818 [2024-10-15 02:18:49.412012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.425753] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:49:40.818 [2024-10-15 02:18:49.425792] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:49:40.818 [2024-10-15 02:18:49.425829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.425840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:49:40.818 [2024-10-15 02:18:49.425852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.641 ms 00:49:40.818 [2024-10-15 02:18:49.425862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.440096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.440134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:49:40.818 [2024-10-15 02:18:49.440166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.188 ms 00:49:40.818 [2024-10-15 02:18:49.440184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.452305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.452342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:49:40.818 [2024-10-15 02:18:49.452371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.075 ms 00:49:40.818 [2024-10-15 02:18:49.452381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.464648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.464685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:49:40.818 [2024-10-15 02:18:49.464714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.213 ms 00:49:40.818 [2024-10-15 02:18:49.464724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.465481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.465540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:49:40.818 [2024-10-15 02:18:49.465554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.639 ms 00:49:40.818 [2024-10-15 02:18:49.465566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.529242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.529319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:49:40.818 [2024-10-15 02:18:49.529353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 63.647 ms 00:49:40.818 [2024-10-15 02:18:49.529365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.539327] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident 
size is: 1 (of 2) MiB 00:49:40.818 [2024-10-15 02:18:49.540073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.540128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:49:40.818 [2024-10-15 02:18:49.540143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.624 ms 00:49:40.818 [2024-10-15 02:18:49.540154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.540283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.540316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:49:40.818 [2024-10-15 02:18:49.540344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:49:40.818 [2024-10-15 02:18:49.540355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.540438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.540472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:49:40.818 [2024-10-15 02:18:49.540491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:49:40.818 [2024-10-15 02:18:49.540503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.540539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.540553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:49:40.818 [2024-10-15 02:18:49.540564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:49:40.818 [2024-10-15 02:18:49.540575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.540620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:49:40.818 [2024-10-15 02:18:49.540636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.540652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:49:40.818 [2024-10-15 02:18:49.540663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:49:40.818 [2024-10-15 02:18:49.540677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.565577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.565635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:49:40.818 [2024-10-15 02:18:49.565651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.873 ms 00:49:40.818 [2024-10-15 02:18:49.565662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.565753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:40.818 [2024-10-15 02:18:49.565771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:49:40.818 [2024-10-15 02:18:49.565787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:49:40.818 [2024-10-15 02:18:49.565797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:40.818 [2024-10-15 02:18:49.567383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3314.518 ms, result 0 00:49:40.818 [2024-10-15 02:18:49.581975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:49:40.818 [2024-10-15 02:18:49.597979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:49:40.818 [2024-10-15 02:18:49.606117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:40.818 02:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:40.818 02:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:49:40.818 02:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:49:40.818 02:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:49:40.818 02:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:49:41.077 [2024-10-15 02:18:49.834076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:41.077 [2024-10-15 02:18:49.834117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:41.077 [2024-10-15 02:18:49.834147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:49:41.077 [2024-10-15 02:18:49.834158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:41.077 [2024-10-15 02:18:49.834187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:41.077 [2024-10-15 02:18:49.834202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:41.077 [2024-10-15 02:18:49.834212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:49:41.077 [2024-10-15 02:18:49.834222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:41.077 [2024-10-15 02:18:49.834252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:41.077 [2024-10-15 02:18:49.834265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:41.077 [2024-10-15 02:18:49.834275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:49:41.077 [2024-10-15 02:18:49.834285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:41.077 [2024-10-15 02:18:49.834374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.258 ms, result 0 00:49:41.077 true 00:49:41.077 02:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:41.336 { 00:49:41.336 "name": "ftl", 00:49:41.336 "properties": [ 00:49:41.336 { 00:49:41.336 "name": "superblock_version", 00:49:41.336 "value": 5, 00:49:41.336 "read-only": true 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "name": "base_device", 00:49:41.336 "bands": [ 00:49:41.336 { 00:49:41.336 "id": 0, 00:49:41.336 "state": "CLOSED", 00:49:41.336 "validity": 1.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 1, 00:49:41.336 "state": "CLOSED", 00:49:41.336 "validity": 1.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 2, 00:49:41.336 "state": "CLOSED", 00:49:41.336 "validity": 0.007843137254901933 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 3, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 4, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 5, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 
00:49:41.336 { 00:49:41.336 "id": 6, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 7, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 8, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 9, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 10, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 11, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 12, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 13, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 14, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 15, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 16, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 17, 00:49:41.336 "state": "FREE", 00:49:41.336 "validity": 0.0 00:49:41.336 } 00:49:41.336 ], 00:49:41.336 "read-only": true 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "name": "cache_device", 00:49:41.336 "type": "bdev", 00:49:41.336 "chunks": [ 00:49:41.336 { 00:49:41.336 "id": 0, 00:49:41.336 "state": "INACTIVE", 00:49:41.336 "utilization": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 1, 00:49:41.336 "state": "OPEN", 00:49:41.336 "utilization": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 2, 00:49:41.336 "state": "OPEN", 00:49:41.336 "utilization": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 3, 00:49:41.336 "state": "FREE", 00:49:41.336 "utilization": 0.0 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "id": 4, 00:49:41.336 "state": "FREE", 00:49:41.336 "utilization": 0.0 00:49:41.336 } 00:49:41.336 ], 00:49:41.336 "read-only": true 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "name": "verbose_mode", 00:49:41.336 "value": true, 00:49:41.336 "unit": "", 00:49:41.336 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:49:41.336 }, 00:49:41.336 { 00:49:41.336 "name": "prep_upgrade_on_shutdown", 00:49:41.336 "value": false, 00:49:41.336 "unit": "", 00:49:41.336 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:49:41.336 } 00:49:41.336 ] 00:49:41.336 } 00:49:41.336 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:49:41.336 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:41.336 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:49:41.595 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:49:41.595 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:49:41.595 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:49:41.595 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 
00:49:41.595 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:49:41.853 Validate MD5 checksum, iteration 1 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:41.853 02:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:41.853 [2024-10-15 02:18:50.801850] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:41.853 [2024-10-15 02:18:50.802042] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82248 ] 00:49:42.111 [2024-10-15 02:18:50.959358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:42.370 [2024-10-15 02:18:51.163165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:42.628 [2024-10-15 02:18:51.577046] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:49:44.004  [2024-10-15T02:18:53.953Z] Copying: 478/1024 [MB] (478 MBps) [2024-10-15T02:18:54.211Z] Copying: 934/1024 [MB] (456 MBps) [2024-10-15T02:18:54.211Z] Copying: 1024/1024 [MB] (average 465 MBps)[2024-10-15 02:18:54.156786] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 
00:49:46.575 00:49:46.575 00:49:46.575 02:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:49:46.575 02:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:48.477 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:49:48.477 Validate MD5 checksum, iteration 2 00:49:48.477 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b655bb071198f4fbccc7dab2b1fec1ba 00:49:48.477 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b655bb071198f4fbccc7dab2b1fec1ba != \b\6\5\5\b\b\0\7\1\1\9\8\f\4\f\b\c\c\c\7\d\a\b\2\b\1\f\e\c\1\b\a ]] 00:49:48.477 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:49:48.477 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:48.478 02:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:48.478 [2024-10-15 02:18:57.092611] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:48.478 [2024-10-15 02:18:57.092969] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82315 ] 00:49:48.478 [2024-10-15 02:18:57.270915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:48.736 [2024-10-15 02:18:57.518233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:48.995 [2024-10-15 02:18:57.922493] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:49:50.372  [2024-10-15T02:19:00.319Z] Copying: 482/1024 [MB] (482 MBps) [2024-10-15T02:19:00.577Z] Copying: 913/1024 [MB] (431 MBps) [2024-10-15T02:19:01.142Z] Copying: 1024/1024 [MB] (average 458 MBps)[2024-10-15 02:19:00.843082] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 
00:49:53.072 00:49:53.072 00:49:53.072 02:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:49:53.072 02:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a38ec1daf667dd9da657274657aa74e9 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a38ec1daf667dd9da657274657aa74e9 != \a\3\8\e\c\1\d\a\f\6\6\7\d\d\9\d\a\6\5\7\2\7\4\6\5\7\a\a\7\4\e\9 ]] 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 82173 ]] 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 82173 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82387 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82387 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82387 ']' 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:54.975 02:19:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:49:54.975 [2024-10-15 02:19:03.625479] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:49:54.975 [2024-10-15 02:19:03.625653] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82387 ] 00:49:54.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 82173 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:49:54.975 [2024-10-15 02:19:03.799734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:54.975 [2024-10-15 02:19:03.980771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:49:55.911 [2024-10-15 02:19:04.802826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:49:55.911 [2024-10-15 02:19:04.802890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:49:56.172 [2024-10-15 02:19:04.936469] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:49:56.172 [2024-10-15 02:19:04.949090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.949131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:49:56.172 [2024-10-15 02:19:04.949148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:49:56.172 [2024-10-15 02:19:04.949159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.949217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.949234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:49:56.172 [2024-10-15 02:19:04.949244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:49:56.172 [2024-10-15 02:19:04.949253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.949293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:49:56.172 [2024-10-15 02:19:04.950018] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:49:56.172 [2024-10-15 02:19:04.950049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.950059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:49:56.172 [2024-10-15 02:19:04.950075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.768 ms 00:49:56.172 [2024-10-15 02:19:04.950084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.950470] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:49:56.172 [2024-10-15 02:19:04.968359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.968397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:49:56.172 [2024-10-15 02:19:04.968429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.890 ms 00:49:56.172 [2024-10-15 02:19:04.968439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.977758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.977794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:49:56.172 [2024-10-15 02:19:04.977824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:49:56.172 [2024-10-15 02:19:04.977833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.978245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.978270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:49:56.172 [2024-10-15 02:19:04.978283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:49:56.172 [2024-10-15 02:19:04.978292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.978351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.978368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:49:56.172 [2024-10-15 02:19:04.978378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:49:56.172 [2024-10-15 02:19:04.978387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.978439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.978455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:49:56.172 [2024-10-15 02:19:04.978469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:49:56.172 [2024-10-15 02:19:04.978479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.978506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:49:56.172 [2024-10-15 02:19:04.981637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.981669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:49:56.172 [2024-10-15 02:19:04.981682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.137 ms 00:49:56.172 [2024-10-15 02:19:04.981691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.981719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.981732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:49:56.172 [2024-10-15 02:19:04.981742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:56.172 [2024-10-15 02:19:04.981751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.981778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:49:56.172 [2024-10-15 02:19:04.981802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:49:56.172 [2024-10-15 02:19:04.981838] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:49:56.172 [2024-10-15 02:19:04.981854] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:49:56.172 [2024-10-15 02:19:04.981940] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:49:56.172 [2024-10-15 02:19:04.981954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:49:56.172 [2024-10-15 02:19:04.981966] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:49:56.172 [2024-10-15 02:19:04.981978] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:49:56.172 [2024-10-15 02:19:04.981989] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:49:56.172 [2024-10-15 02:19:04.982003] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:49:56.172 [2024-10-15 02:19:04.982012] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:49:56.172 [2024-10-15 02:19:04.982020] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:49:56.172 [2024-10-15 02:19:04.982028] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:49:56.172 [2024-10-15 02:19:04.982037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.982047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:49:56.172 [2024-10-15 02:19:04.982056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:49:56.172 [2024-10-15 02:19:04.982064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.982154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.172 [2024-10-15 02:19:04.982182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:49:56.172 [2024-10-15 02:19:04.982198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:49:56.172 [2024-10-15 02:19:04.982207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.172 [2024-10-15 02:19:04.982314] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:49:56.172 [2024-10-15 02:19:04.982330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:49:56.172 [2024-10-15 02:19:04.982340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:49:56.172 [2024-10-15 02:19:04.982349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.172 [2024-10-15 02:19:04.982359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:49:56.172 [2024-10-15 02:19:04.982370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:49:56.172 [2024-10-15 02:19:04.982380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:49:56.172 [2024-10-15 02:19:04.982389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:49:56.172 [2024-10-15 02:19:04.982398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:49:56.172 [2024-10-15 02:19:04.982407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.172 [2024-10-15 02:19:04.982429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:49:56.172 [2024-10-15 02:19:04.982442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:49:56.172 [2024-10-15 02:19:04.982452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.172 [2024-10-15 02:19:04.982461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:49:56.172 [2024-10-15 02:19:04.982470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:49:56.173 [2024-10-15 02:19:04.982479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 
02:19:04.982488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:49:56.173 [2024-10-15 02:19:04.982497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:49:56.173 [2024-10-15 02:19:04.982506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:49:56.173 [2024-10-15 02:19:04.982525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:49:56.173 [2024-10-15 02:19:04.982573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:49:56.173 [2024-10-15 02:19:04.982593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:49:56.173 [2024-10-15 02:19:04.982603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:49:56.173 [2024-10-15 02:19:04.982621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:49:56.173 [2024-10-15 02:19:04.982630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:49:56.173 [2024-10-15 02:19:04.982647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:49:56.173 [2024-10-15 02:19:04.982656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:49:56.173 [2024-10-15 02:19:04.982674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:49:56.173 [2024-10-15 02:19:04.982683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:49:56.173 [2024-10-15 02:19:04.982716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:49:56.173 [2024-10-15 02:19:04.982746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:49:56.173 [2024-10-15 02:19:04.982772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:49:56.173 [2024-10-15 02:19:04.982781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982790] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:49:56.173 [2024-10-15 02:19:04.982800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:49:56.173 [2024-10-15 02:19:04.982809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:56.173 [2024-10-15 02:19:04.982834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:49:56.173 [2024-10-15 
02:19:04.982843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:49:56.173 [2024-10-15 02:19:04.982852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:49:56.173 [2024-10-15 02:19:04.982862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:49:56.173 [2024-10-15 02:19:04.982871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:49:56.173 [2024-10-15 02:19:04.982880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:49:56.173 [2024-10-15 02:19:04.982891] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:49:56.173 [2024-10-15 02:19:04.982903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.982914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:49:56.173 [2024-10-15 02:19:04.982923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.982933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.982943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:49:56.173 [2024-10-15 02:19:04.982953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:49:56.173 [2024-10-15 02:19:04.982962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:49:56.173 [2024-10-15 02:19:04.982971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:49:56.173 [2024-10-15 02:19:04.982995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:49:56.173 [2024-10-15 02:19:04.983059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:49:56.173 [2024-10-15 02:19:04.983069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:56.173 [2024-10-15 02:19:04.983089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:49:56.173 [2024-10-15 02:19:04.983098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:49:56.173 [2024-10-15 02:19:04.983107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:49:56.173 [2024-10-15 02:19:04.983117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:04.983126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:49:56.173 [2024-10-15 02:19:04.983135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:49:56.173 [2024-10-15 02:19:04.983145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.013578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:05.013624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:49:56.173 [2024-10-15 02:19:05.013641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.372 ms 00:49:56.173 [2024-10-15 02:19:05.013656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.013709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:05.013724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:49:56.173 [2024-10-15 02:19:05.013742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:49:56.173 [2024-10-15 02:19:05.013752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.061668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:05.061714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:49:56.173 [2024-10-15 02:19:05.061732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.843 ms 00:49:56.173 [2024-10-15 02:19:05.061742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.061800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:05.061817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:49:56.173 [2024-10-15 02:19:05.061829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:56.173 [2024-10-15 02:19:05.061838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.062002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:05.062020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:49:56.173 [2024-10-15 02:19:05.062033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:49:56.173 [2024-10-15 02:19:05.062043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.062106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 
[2024-10-15 02:19:05.062131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:49:56.173 [2024-10-15 02:19:05.062142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:49:56.173 [2024-10-15 02:19:05.062152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.079714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.173 [2024-10-15 02:19:05.079755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:49:56.173 [2024-10-15 02:19:05.079770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.535 ms 00:49:56.173 [2024-10-15 02:19:05.079780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.173 [2024-10-15 02:19:05.079928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.174 [2024-10-15 02:19:05.079953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:49:56.174 [2024-10-15 02:19:05.079965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:49:56.174 [2024-10-15 02:19:05.079980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.174 [2024-10-15 02:19:05.098206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.174 [2024-10-15 02:19:05.098262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:49:56.174 [2024-10-15 02:19:05.098279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.176 ms 00:49:56.174 [2024-10-15 02:19:05.098297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.174 [2024-10-15 02:19:05.108061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.174 [2024-10-15 02:19:05.108097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:49:56.174 [2024-10-15 02:19:05.108112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:49:56.174 [2024-10-15 02:19:05.108122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.174 [2024-10-15 02:19:05.170478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.174 [2024-10-15 02:19:05.170549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:49:56.174 [2024-10-15 02:19:05.170569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.290 ms 00:49:56.174 [2024-10-15 02:19:05.170580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.174 [2024-10-15 02:19:05.170753] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:49:56.174 [2024-10-15 02:19:05.170880] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:49:56.174 [2024-10-15 02:19:05.171008] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:49:56.174 [2024-10-15 02:19:05.171123] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:49:56.174 [2024-10-15 02:19:05.171137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.174 [2024-10-15 02:19:05.171147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:49:56.174 [2024-10-15 02:19:05.171165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 00:49:56.174 [2024-10-15 02:19:05.171174] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.174 [2024-10-15 02:19:05.171275] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:49:56.174 [2024-10-15 02:19:05.171294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.174 [2024-10-15 02:19:05.171305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:49:56.174 [2024-10-15 02:19:05.171323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:49:56.174 [2024-10-15 02:19:05.171334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.433 [2024-10-15 02:19:05.187731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.433 [2024-10-15 02:19:05.187774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:49:56.433 [2024-10-15 02:19:05.187807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.369 ms 00:49:56.433 [2024-10-15 02:19:05.187818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.433 [2024-10-15 02:19:05.197185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.433 [2024-10-15 02:19:05.197225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:49:56.433 [2024-10-15 02:19:05.197244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:49:56.433 [2024-10-15 02:19:05.197253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:56.433 [2024-10-15 02:19:05.197359] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:49:56.433 [2024-10-15 02:19:05.197626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:56.433 [2024-10-15 02:19:05.197649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:49:56.433 [2024-10-15 02:19:05.197661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:49:56.433 [2024-10-15 02:19:05.197671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.001 [2024-10-15 02:19:05.808752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.001 [2024-10-15 02:19:05.808842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:49:57.001 [2024-10-15 02:19:05.808893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 610.064 ms 00:49:57.001 [2024-10-15 02:19:05.808935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.001 [2024-10-15 02:19:05.813429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.001 [2024-10-15 02:19:05.813469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:49:57.001 [2024-10-15 02:19:05.813485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.124 ms 00:49:57.001 [2024-10-15 02:19:05.813497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.001 [2024-10-15 02:19:05.814034] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:49:57.001 [2024-10-15 02:19:05.814069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.001 [2024-10-15 02:19:05.814083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:49:57.001 [2024-10-15 02:19:05.814096] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:49:57.001 [2024-10-15 02:19:05.814115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.001 [2024-10-15 02:19:05.814188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.001 [2024-10-15 02:19:05.814205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:49:57.001 [2024-10-15 02:19:05.814217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:49:57.001 [2024-10-15 02:19:05.814228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.001 [2024-10-15 02:19:05.814303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 616.915 ms, result 0 00:49:57.001 [2024-10-15 02:19:05.814352] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:49:57.001 [2024-10-15 02:19:05.814433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.001 [2024-10-15 02:19:05.814445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:49:57.001 [2024-10-15 02:19:05.814456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:49:57.001 [2024-10-15 02:19:05.814482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.568 [2024-10-15 02:19:06.431417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.568 [2024-10-15 02:19:06.431514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:49:57.569 [2024-10-15 02:19:06.431581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.851 ms 00:49:57.569 [2024-10-15 02:19:06.431593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.435823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.435865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:49:57.569 [2024-10-15 02:19:06.435897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:49:57.569 [2024-10-15 02:19:06.435907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.436479] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:49:57.569 [2024-10-15 02:19:06.436512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.436525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:49:57.569 [2024-10-15 02:19:06.436536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:49:57.569 [2024-10-15 02:19:06.436561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.436617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.436634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:49:57.569 [2024-10-15 02:19:06.436645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:49:57.569 [2024-10-15 02:19:06.436655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.436730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 
622.376 ms, result 0 00:49:57.569 [2024-10-15 02:19:06.436782] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:49:57.569 [2024-10-15 02:19:06.436799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:49:57.569 [2024-10-15 02:19:06.436811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.436827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:49:57.569 [2024-10-15 02:19:06.436838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1239.482 ms 00:49:57.569 [2024-10-15 02:19:06.436848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.436882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.436896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:49:57.569 [2024-10-15 02:19:06.436907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:49:57.569 [2024-10-15 02:19:06.436916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.447837] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:49:57.569 [2024-10-15 02:19:06.447992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.448010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:49:57.569 [2024-10-15 02:19:06.448023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.057 ms 00:49:57.569 [2024-10-15 02:19:06.448033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.448708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.448736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:49:57.569 [2024-10-15 02:19:06.448749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 00:49:57.569 [2024-10-15 02:19:06.448760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.450785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.450814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:49:57.569 [2024-10-15 02:19:06.450827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.003 ms 00:49:57.569 [2024-10-15 02:19:06.450856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.450899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.450913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:49:57.569 [2024-10-15 02:19:06.450924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:57.569 [2024-10-15 02:19:06.450934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.451040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.451056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:49:57.569 [2024-10-15 02:19:06.451083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:49:57.569 [2024-10-15 
02:19:06.451093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.451124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.451136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:49:57.569 [2024-10-15 02:19:06.451147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:49:57.569 [2024-10-15 02:19:06.451157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.451194] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:49:57.569 [2024-10-15 02:19:06.451210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.451220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:49:57.569 [2024-10-15 02:19:06.451230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:49:57.569 [2024-10-15 02:19:06.451240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.451299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:57.569 [2024-10-15 02:19:06.451318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:49:57.569 [2024-10-15 02:19:06.451330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:49:57.569 [2024-10-15 02:19:06.451339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:57.569 [2024-10-15 02:19:06.452764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1503.062 ms, result 0 00:49:57.569 [2024-10-15 02:19:06.468346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:57.569 [2024-10-15 02:19:06.484341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:49:57.569 [2024-10-15 02:19:06.493295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:57.569 Validate MD5 checksum, iteration 1 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 
00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:57.569 02:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:57.827 [2024-10-15 02:19:06.641291] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:49:57.827 [2024-10-15 02:19:06.641496] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82423 ] 00:49:57.827 [2024-10-15 02:19:06.812770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:58.085 [2024-10-15 02:19:07.065903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:58.651 [2024-10-15 02:19:07.472273] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:50:00.027  [2024-10-15T02:19:10.048Z] Copying: 463/1024 [MB] (463 MBps) [2024-10-15T02:19:10.048Z] Copying: 909/1024 [MB] (446 MBps) [2024-10-15T02:19:10.307Z] Copying: 1024/1024 [MB] (average 454 MBps)[2024-10-15 02:19:10.262666] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 
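tcp_dd, used for every read in this test, is a thin wrapper from test/ftl/common.sh: per the @198-199 xtrace above, it runs tcp_initiator_setup (which returns immediately once config/ini.json exists, see @153-154) and then launches a one-shot spdk_dd pinned to core 1 with that config. A sketch, with $SPDK_DIR standing in for /home/vagrant/spdk_repo/spdk:

# tcp_dd per ftl/common.sh@198-199, as exercised above.
tcp_dd() {
  tcp_initiator_setup   # no-op here: config/ini.json is already in place
  # ini.json describes an NVMe/TCP initiator bdev (the target listens on
  # 127.0.0.1:4420) that exposes ftln1; spdk_dd then copies through it dd-style.
  "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$SPDK_DIR/test/ftl/config/ini.json" "$@"
}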
00:50:02.670 00:50:02.670 00:50:02.670 02:19:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:50:02.670 02:19:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:50:04.574 Validate MD5 checksum, iteration 2 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b655bb071198f4fbccc7dab2b1fec1ba 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b655bb071198f4fbccc7dab2b1fec1ba != \b\6\5\5\b\b\0\7\1\1\9\8\f\4\f\b\c\c\c\7\d\a\b\2\b\1\f\e\c\1\b\a ]] 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:50:04.574 02:19:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:50:04.574 [2024-10-15 02:19:13.301455] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 00:50:04.574 [2024-10-15 02:19:13.301614] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82492 ] 00:50:04.574 [2024-10-15 02:19:13.467585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:04.833 [2024-10-15 02:19:13.711832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:50:05.400 [2024-10-15 02:19:14.117479] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030500 was disconnected and freed. delete nvme_qpair. 00:50:06.336  [2024-10-15T02:19:16.725Z] Copying: 453/1024 [MB] (453 MBps) [2024-10-15T02:19:16.725Z] Copying: 925/1024 [MB] (472 MBps) [2024-10-15T02:19:16.983Z] Copying: 1024/1024 [MB] (average 456 MBps)[2024-10-15 02:19:16.876216] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [nqn.2018-09.io.spdk:cnode0] qpair 0x615000030780 was disconnected and freed. delete nvme_qpair. 
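Both post-restart chunks hash to the same values as before the kill: chunk 1's sum (b655...) was just re-verified above, and chunk 2's (a38e...) follows below. That is the point of the test: the target was SIGKILLed at common.sh@138 with no chance to persist anything, relaunched from tgt.json, and FTL rebuilt its state from the superblock, P2L checkpoints, and open NV-cache chunks (the 'Recover ...' steps traced earlier). A simplified sketch of that dirty restart, using the helper names visible in the xtrace, with paths abbreviated via $SPDK_DIR:

# Dirty shutdown + relaunch per ftl/common.sh@137-139 and @81-91, simplified.
tcp_target_shutdown_dirty() {
  # SIGKILL: the 'FTL shutdown' sequence never runs, leaving dirty state behind.
  [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
}
tcp_target_setup() {
  # Relaunch from the saved target config so the same bdevs reappear;
  # FTL detects the unclean state on startup and enters recovery.
  $spdk_tgt_bin '--cpumask=[0]' --config="$SPDK_DIR/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
}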
00:50:08.905 00:50:08.905 00:50:08.905 02:19:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:50:08.905 02:19:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a38ec1daf667dd9da657274657aa74e9 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a38ec1daf667dd9da657274657aa74e9 != \a\3\8\e\c\1\d\a\f\6\6\7\d\d\9\d\a\6\5\7\2\7\4\6\5\7\a\a\7\4\e\9 ]] 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82387 ]] 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82387 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82387 ']' 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 82387 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82387 00:50:10.821 killing process with pid 82387 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82387' 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 82387 00:50:10.821 02:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 82387 00:50:11.761 [2024-10-15 02:19:20.622388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:50:11.762 [2024-10-15 02:19:20.638919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.638976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:50:11.762 [2024-10-15 02:19:20.639016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:50:11.762 [2024-10-15 02:19:20.639027] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.639056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:50:11.762 [2024-10-15 02:19:20.642184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.642212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:50:11.762 [2024-10-15 02:19:20.642240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.108 ms 00:50:11.762 [2024-10-15 02:19:20.642250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.642578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.642597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:50:11.762 [2024-10-15 02:19:20.642609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:50:11.762 [2024-10-15 02:19:20.642620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.643917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.643971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:50:11.762 [2024-10-15 02:19:20.643985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.268 ms 00:50:11.762 [2024-10-15 02:19:20.643997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.645164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.645207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:50:11.762 [2024-10-15 02:19:20.645235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.109 ms 00:50:11.762 [2024-10-15 02:19:20.645245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.655349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.655385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:50:11.762 [2024-10-15 02:19:20.655415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.041 ms 00:50:11.762 [2024-10-15 02:19:20.655437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.661049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.661083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:50:11.762 [2024-10-15 02:19:20.661114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.574 ms 00:50:11.762 [2024-10-15 02:19:20.661124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.661208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.661224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:50:11.762 [2024-10-15 02:19:20.661235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:50:11.762 [2024-10-15 02:19:20.661245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.671610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.671659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 
00:50:11.762 [2024-10-15 02:19:20.671688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.345 ms 00:50:11.762 [2024-10-15 02:19:20.671697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.681749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.681782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:50:11.762 [2024-10-15 02:19:20.681810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.014 ms 00:50:11.762 [2024-10-15 02:19:20.681819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.691668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.691714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:50:11.762 [2024-10-15 02:19:20.691742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.812 ms 00:50:11.762 [2024-10-15 02:19:20.691752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.701644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.701692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:50:11.762 [2024-10-15 02:19:20.701720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.807 ms 00:50:11.762 [2024-10-15 02:19:20.701729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.701766] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:50:11.762 [2024-10-15 02:19:20.701803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:50:11.762 [2024-10-15 02:19:20.701815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:50:11.762 [2024-10-15 02:19:20.701825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:50:11.762 [2024-10-15 02:19:20.701836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 
[2024-10-15 02:19:20.701971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.701997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.702008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.702019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.702029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:11.762 [2024-10-15 02:19:20.702042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:50:11.762 [2024-10-15 02:19:20.702052] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 42fc2f33-d538-40a1-b179-49da2798d81c 00:50:11.762 [2024-10-15 02:19:20.702063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:50:11.762 [2024-10-15 02:19:20.702073] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:50:11.762 [2024-10-15 02:19:20.702091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:50:11.762 [2024-10-15 02:19:20.702101] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:50:11.762 [2024-10-15 02:19:20.702111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:50:11.762 [2024-10-15 02:19:20.702121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:50:11.762 [2024-10-15 02:19:20.702131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:50:11.762 [2024-10-15 02:19:20.702140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:50:11.762 [2024-10-15 02:19:20.702149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:50:11.762 [2024-10-15 02:19:20.702159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.702170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:50:11.762 [2024-10-15 02:19:20.702181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:50:11.762 [2024-10-15 02:19:20.702194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.718406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.718486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:50:11.762 [2024-10-15 02:19:20.718504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.189 ms 00:50:11.762 [2024-10-15 02:19:20.718527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.719031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:11.762 [2024-10-15 02:19:20.719051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:50:11.762 [2024-10-15 02:19:20.719064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.455 ms 00:50:11.762 [2024-10-15 02:19:20.719074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.762944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:11.762 [2024-10-15 02:19:20.762997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:50:11.762 [2024-10-15 
02:19:20.763011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:11.762 [2024-10-15 02:19:20.763021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.763073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:11.762 [2024-10-15 02:19:20.763086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:50:11.762 [2024-10-15 02:19:20.763096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:11.762 [2024-10-15 02:19:20.763105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.763205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:11.762 [2024-10-15 02:19:20.763258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:50:11.762 [2024-10-15 02:19:20.763285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:11.762 [2024-10-15 02:19:20.763295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:11.762 [2024-10-15 02:19:20.763320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:11.762 [2024-10-15 02:19:20.763332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:50:11.762 [2024-10-15 02:19:20.763343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:11.762 [2024-10-15 02:19:20.763353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.848264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.848322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:50:12.021 [2024-10-15 02:19:20.848337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.848346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:50:12.021 [2024-10-15 02:19:20.918093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:50:12.021 [2024-10-15 02:19:20.918230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:50:12.021 [2024-10-15 02:19:20.918364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Initialize memory pools 00:50:12.021 [2024-10-15 02:19:20.918561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:50:12.021 [2024-10-15 02:19:20.918733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:50:12.021 [2024-10-15 02:19:20.918833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.918908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:12.021 [2024-10-15 02:19:20.918924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:50:12.021 [2024-10-15 02:19:20.918937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:12.021 [2024-10-15 02:19:20.918948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:12.021 [2024-10-15 02:19:20.919146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 280.141 ms, result 0 00:50:12.021 [2024-10-15 02:19:20.920214] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:10.0] qpair 0x20001c438da0 was disconnected and freed. delete nvme_qpair. 00:50:12.021 [2024-10-15 02:19:20.923355] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair. 
00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:13.399 Remove shared memory files 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:50:13.399 02:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid82173 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:50:13.399 00:50:13.399 real 1m29.883s 00:50:13.399 user 2m4.494s 00:50:13.399 sys 0m25.726s 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:13.399 02:19:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:13.399 ************************************ 00:50:13.399 END TEST ftl_upgrade_shutdown 00:50:13.399 ************************************ 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@14 -- # killprocess 74532 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@950 -- # '[' -z 74532 ']' 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@954 -- # kill -0 74532 00:50:13.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74532) - No such process 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 74532 is not found' 00:50:13.399 Process with pid 74532 is not found 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:50:13.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82621 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82621 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@831 -- # '[' -z 82621 ']' 00:50:13.399 02:19:22 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:13.399 02:19:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:50:13.399 [2024-10-15 02:19:22.157157] Starting SPDK v25.01-pre git sha1 d056e7588 / DPDK 24.03.0 initialization... 
00:50:13.399 [2024-10-15 02:19:22.157294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82621 ] 00:50:13.399 [2024-10-15 02:19:22.317256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:13.658 [2024-10-15 02:19:22.541995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:50:14.594 02:19:23 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:14.594 02:19:23 ftl -- common/autotest_common.sh@864 -- # return 0 00:50:14.594 02:19:23 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:50:14.594 nvme0n1 00:50:14.594 02:19:23 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:50:14.595 02:19:23 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:50:14.595 02:19:23 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:50:14.853 02:19:23 ftl -- ftl/common.sh@28 -- # stores=c5eca432-877f-4799-ad97-ced9694a46f3 00:50:14.853 02:19:23 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:50:14.853 02:19:23 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5eca432-877f-4799-ad97-ced9694a46f3 00:50:15.112 [2024-10-15 02:19:24.063887] bdev_nvme.c:1777:bdev_nvme_disconnected_qpair_cb: *NOTICE*: [0000:00:11.0] qpair 0x200035015720 was disconnected and freed. delete nvme_qpair. 00:50:15.112 02:19:24 ftl -- ftl/ftl.sh@23 -- # killprocess 82621 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@950 -- # '[' -z 82621 ']' 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@954 -- # kill -0 82621 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@955 -- # uname 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82621 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:50:15.112 killing process with pid 82621 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82621' 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@969 -- # kill 82621 00:50:15.112 02:19:24 ftl -- common/autotest_common.sh@974 -- # wait 82621 00:50:17.646 02:19:26 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:50:17.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:17.646 Waiting for block devices as requested 00:50:17.646 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:50:17.646 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:50:17.904 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:50:17.904 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:50:23.172 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:50:23.172 02:19:31 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:50:23.172 Remove shared memory files 00:50:23.172 02:19:31 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:50:23.172 02:19:31 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:50:23.172 02:19:31 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:50:23.172 02:19:31 ftl -- ftl/common.sh@207 -- # rm -f rm 
-f 00:50:23.172 02:19:31 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:50:23.172 02:19:31 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:50:23.172 ************************************ 00:50:23.172 END TEST ftl 00:50:23.172 ************************************ 00:50:23.172 00:50:23.172 real 12m10.112s 00:50:23.172 user 15m6.296s 00:50:23.172 sys 1m32.182s 00:50:23.172 02:19:31 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:23.172 02:19:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:50:23.172 02:19:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:50:23.172 02:19:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:50:23.172 02:19:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:50:23.172 02:19:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:50:23.172 02:19:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:50:23.172 02:19:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:50:23.172 02:19:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:50:23.172 02:19:31 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:50:23.172 02:19:31 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:50:23.172 02:19:31 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:50:23.172 02:19:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:50:23.172 02:19:31 -- common/autotest_common.sh@10 -- # set +x 00:50:23.172 02:19:31 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:50:23.172 02:19:31 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:50:23.172 02:19:31 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:50:23.172 02:19:31 -- common/autotest_common.sh@10 -- # set +x 00:50:24.548 INFO: APP EXITING 00:50:24.548 INFO: killing all VMs 00:50:24.548 INFO: killing vhost app 00:50:24.548 INFO: EXIT DONE 00:50:25.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:25.374 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:50:25.374 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:50:25.374 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:50:25.374 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:50:25.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:26.201 Cleaning 00:50:26.201 Removing: /var/run/dpdk/spdk0/config 00:50:26.201 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:50:26.201 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:50:26.201 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:50:26.201 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:50:26.201 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:50:26.201 Removing: /var/run/dpdk/spdk0/hugepage_info 00:50:26.201 Removing: /var/run/dpdk/spdk0 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58092 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58321 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58556 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58660 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58716 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58855 00:50:26.201 Removing: /var/run/dpdk/spdk_pid58873 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59083 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59200 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59307 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59435 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59548 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59588 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59630 00:50:26.201 Removing: 
/var/run/dpdk/spdk_pid59706 00:50:26.201 Removing: /var/run/dpdk/spdk_pid59823 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60305 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60382 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60457 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60473 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60634 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60656 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60811 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60833 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60899 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60926 00:50:26.201 Removing: /var/run/dpdk/spdk_pid60990 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61008 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61209 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61251 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61340 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61534 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61629 00:50:26.201 Removing: /var/run/dpdk/spdk_pid61671 00:50:26.464 Removing: /var/run/dpdk/spdk_pid62158 00:50:26.464 Removing: /var/run/dpdk/spdk_pid62262 00:50:26.464 Removing: /var/run/dpdk/spdk_pid62382 00:50:26.464 Removing: /var/run/dpdk/spdk_pid62435 00:50:26.464 Removing: /var/run/dpdk/spdk_pid62472 00:50:26.464 Removing: /var/run/dpdk/spdk_pid62556 00:50:26.464 Removing: /var/run/dpdk/spdk_pid63197 00:50:26.464 Removing: /var/run/dpdk/spdk_pid63245 00:50:26.464 Removing: /var/run/dpdk/spdk_pid63776 00:50:26.464 Removing: /var/run/dpdk/spdk_pid63880 00:50:26.464 Removing: /var/run/dpdk/spdk_pid64005 00:50:26.464 Removing: /var/run/dpdk/spdk_pid64063 00:50:26.464 Removing: /var/run/dpdk/spdk_pid64094 00:50:26.464 Removing: /var/run/dpdk/spdk_pid64131 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66023 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66171 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66185 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66198 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66239 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66243 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66255 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66300 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66304 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66316 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66366 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66370 00:50:26.464 Removing: /var/run/dpdk/spdk_pid66382 00:50:26.464 Removing: /var/run/dpdk/spdk_pid67760 00:50:26.464 Removing: /var/run/dpdk/spdk_pid67881 00:50:26.464 Removing: /var/run/dpdk/spdk_pid69305 00:50:26.464 Removing: /var/run/dpdk/spdk_pid70679 00:50:26.464 Removing: /var/run/dpdk/spdk_pid70784 00:50:26.464 Removing: /var/run/dpdk/spdk_pid70888 00:50:26.464 Removing: /var/run/dpdk/spdk_pid70992 00:50:26.464 Removing: /var/run/dpdk/spdk_pid71119 00:50:26.464 Removing: /var/run/dpdk/spdk_pid71195 00:50:26.464 Removing: /var/run/dpdk/spdk_pid71341 00:50:26.464 Removing: /var/run/dpdk/spdk_pid71718 00:50:26.464 Removing: /var/run/dpdk/spdk_pid71749 00:50:26.464 Removing: /var/run/dpdk/spdk_pid72229 00:50:26.464 Removing: /var/run/dpdk/spdk_pid72413 00:50:26.464 Removing: /var/run/dpdk/spdk_pid72515 00:50:26.464 Removing: /var/run/dpdk/spdk_pid72626 00:50:26.464 Removing: /var/run/dpdk/spdk_pid72681 00:50:26.464 Removing: /var/run/dpdk/spdk_pid72712 00:50:26.464 Removing: /var/run/dpdk/spdk_pid73007 00:50:26.464 Removing: /var/run/dpdk/spdk_pid73074 00:50:26.465 Removing: /var/run/dpdk/spdk_pid73158 00:50:26.465 Removing: /var/run/dpdk/spdk_pid73577 00:50:26.465 Removing: /var/run/dpdk/spdk_pid73734 
00:50:26.465 Removing: /var/run/dpdk/spdk_pid74532 00:50:26.465 Removing: /var/run/dpdk/spdk_pid74671 00:50:26.465 Removing: /var/run/dpdk/spdk_pid74875 00:50:26.465 Removing: /var/run/dpdk/spdk_pid74988 00:50:26.465 Removing: /var/run/dpdk/spdk_pid75352 00:50:26.465 Removing: /var/run/dpdk/spdk_pid75639 00:50:26.465 Removing: /var/run/dpdk/spdk_pid75985 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76194 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76335 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76399 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76540 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76575 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76640 00:50:26.465 Removing: /var/run/dpdk/spdk_pid76839 00:50:26.465 Removing: /var/run/dpdk/spdk_pid77085 00:50:26.465 Removing: /var/run/dpdk/spdk_pid77552 00:50:26.465 Removing: /var/run/dpdk/spdk_pid78054 00:50:26.465 Removing: /var/run/dpdk/spdk_pid78519 00:50:26.465 Removing: /var/run/dpdk/spdk_pid79074 00:50:26.465 Removing: /var/run/dpdk/spdk_pid79222 00:50:26.465 Removing: /var/run/dpdk/spdk_pid79319 00:50:26.465 Removing: /var/run/dpdk/spdk_pid80033 00:50:26.465 Removing: /var/run/dpdk/spdk_pid80106 00:50:26.465 Removing: /var/run/dpdk/spdk_pid80620 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81046 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81593 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81729 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81782 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81848 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81904 00:50:26.465 Removing: /var/run/dpdk/spdk_pid81968 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82173 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82248 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82315 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82387 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82423 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82492 00:50:26.465 Removing: /var/run/dpdk/spdk_pid82621 00:50:26.757 Clean 00:50:26.757 02:19:35 -- common/autotest_common.sh@1451 -- # return 0 00:50:26.757 02:19:35 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:50:26.757 02:19:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:50:26.757 02:19:35 -- common/autotest_common.sh@10 -- # set +x 00:50:26.757 02:19:35 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:50:26.757 02:19:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:50:26.757 02:19:35 -- common/autotest_common.sh@10 -- # set +x 00:50:26.757 02:19:35 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:50:26.757 02:19:35 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:50:26.757 02:19:35 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:50:26.757 02:19:35 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:50:26.757 02:19:35 -- spdk/autotest.sh@394 -- # hostname 00:50:26.757 02:19:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:50:27.021 geninfo: WARNING: invalid characters removed from testname! 
00:50:48.947 02:19:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:50:52.234 02:20:01 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:50:54.768 02:20:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:50:57.304 02:20:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:50:59.206 02:20:07 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:01.737 02:20:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:04.269 02:20:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:51:04.269 02:20:12 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:51:04.269 02:20:12 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:51:04.269 02:20:12 -- common/autotest_common.sh@1681 -- $ lcov --version 00:51:04.269 02:20:12 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:51:04.269 02:20:12 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:51:04.269 02:20:12 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:51:04.269 02:20:12 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:51:04.269 02:20:12 -- scripts/common.sh@336 -- $ IFS=.-: 00:51:04.269 02:20:12 -- scripts/common.sh@336 -- $ read -ra ver1 00:51:04.269 02:20:12 -- scripts/common.sh@337 -- $ IFS=.-: 00:51:04.269 02:20:12 -- scripts/common.sh@337 -- $ read -ra ver2 00:51:04.269 02:20:12 -- scripts/common.sh@338 -- $ local 'op=<' 00:51:04.269 02:20:12 -- scripts/common.sh@340 -- $ ver1_l=2 00:51:04.269 02:20:12 -- scripts/common.sh@341 -- $ ver2_l=1 00:51:04.269 02:20:12 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:51:04.269 02:20:12 -- scripts/common.sh@344 -- $ case "$op" in 00:51:04.269 02:20:12 -- scripts/common.sh@345 -- $ : 1 00:51:04.269 02:20:12 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:51:04.269 02:20:12 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:51:04.269 02:20:12 -- scripts/common.sh@365 -- $ decimal 1 00:51:04.269 02:20:12 -- scripts/common.sh@353 -- $ local d=1 00:51:04.269 02:20:12 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:51:04.269 02:20:12 -- scripts/common.sh@355 -- $ echo 1 00:51:04.269 02:20:12 -- scripts/common.sh@365 -- $ ver1[v]=1 00:51:04.269 02:20:12 -- scripts/common.sh@366 -- $ decimal 2 00:51:04.269 02:20:12 -- scripts/common.sh@353 -- $ local d=2 00:51:04.269 02:20:12 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:51:04.269 02:20:12 -- scripts/common.sh@355 -- $ echo 2 00:51:04.269 02:20:12 -- scripts/common.sh@366 -- $ ver2[v]=2 00:51:04.269 02:20:12 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:51:04.269 02:20:12 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:51:04.269 02:20:12 -- scripts/common.sh@368 -- $ return 0 00:51:04.269 02:20:12 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:04.269 02:20:12 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:51:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:04.269 --rc genhtml_branch_coverage=1 00:51:04.269 --rc genhtml_function_coverage=1 00:51:04.269 --rc genhtml_legend=1 00:51:04.269 --rc geninfo_all_blocks=1 00:51:04.269 --rc geninfo_unexecuted_blocks=1 00:51:04.269 00:51:04.269 ' 00:51:04.269 02:20:12 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:51:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:04.269 --rc genhtml_branch_coverage=1 00:51:04.269 --rc genhtml_function_coverage=1 00:51:04.269 --rc genhtml_legend=1 00:51:04.269 --rc geninfo_all_blocks=1 00:51:04.269 --rc geninfo_unexecuted_blocks=1 00:51:04.269 00:51:04.269 ' 00:51:04.269 02:20:12 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:51:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:04.269 --rc genhtml_branch_coverage=1 00:51:04.269 --rc genhtml_function_coverage=1 00:51:04.269 --rc genhtml_legend=1 00:51:04.269 --rc geninfo_all_blocks=1 00:51:04.269 --rc geninfo_unexecuted_blocks=1 00:51:04.269 00:51:04.269 ' 00:51:04.269 02:20:12 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:51:04.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:04.269 --rc genhtml_branch_coverage=1 00:51:04.269 --rc genhtml_function_coverage=1 00:51:04.269 --rc genhtml_legend=1 00:51:04.269 --rc geninfo_all_blocks=1 00:51:04.269 --rc geninfo_unexecuted_blocks=1 00:51:04.269 00:51:04.269 ' 00:51:04.269 02:20:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:04.269 02:20:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:51:04.269 02:20:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:51:04.269 02:20:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:04.269 02:20:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:04.269 02:20:12 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:04.269 02:20:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:04.269 02:20:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:04.269 02:20:12 -- paths/export.sh@5 -- $ export PATH 00:51:04.269 02:20:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:04.269 02:20:12 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:51:04.269 02:20:12 -- common/autobuild_common.sh@486 -- $ date +%s 00:51:04.269 02:20:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728958812.XXXXXX 00:51:04.269 02:20:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728958812.gC8EsZ 00:51:04.269 02:20:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:51:04.269 02:20:12 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:51:04.269 02:20:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:51:04.269 02:20:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:51:04.269 02:20:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:51:04.269 02:20:12 -- common/autobuild_common.sh@502 -- $ get_config_params 00:51:04.269 02:20:12 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:51:04.269 02:20:12 -- common/autotest_common.sh@10 -- $ set +x 00:51:04.269 02:20:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:51:04.269 02:20:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:51:04.269 02:20:12 -- pm/common@17 -- $ local monitor 00:51:04.269 02:20:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:04.269 02:20:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:51:04.269 02:20:12 -- pm/common@25 -- $ sleep 1 00:51:04.269 02:20:12 -- pm/common@21 -- $ date +%s 00:51:04.269 02:20:12 -- pm/common@21 -- $ date +%s 00:51:04.269 02:20:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728958812 00:51:04.269 02:20:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728958812 00:51:04.269 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728958812_collect-cpu-load.pm.log 00:51:04.269 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728958812_collect-vmstat.pm.log 00:51:05.206 02:20:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:51:05.206 02:20:13 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:51:05.206 02:20:13 -- spdk/autopackage.sh@14 -- $ timing_finish 00:51:05.206 02:20:13 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:51:05.206 02:20:13 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:51:05.206 02:20:13 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:51:05.206 02:20:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:51:05.206 02:20:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:51:05.206 02:20:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:51:05.206 02:20:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:05.206 02:20:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:51:05.206 02:20:13 -- pm/common@44 -- $ pid=84377 00:51:05.206 02:20:13 -- pm/common@50 -- $ kill -TERM 84377 00:51:05.206 02:20:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:05.206 02:20:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:51:05.206 02:20:13 -- pm/common@44 -- $ pid=84378 00:51:05.206 02:20:13 -- pm/common@50 -- $ kill -TERM 84378 00:51:05.206 + [[ -n 5294 ]] 00:51:05.206 + sudo kill 5294 00:51:05.216 [Pipeline] } 00:51:05.232 [Pipeline] // timeout 00:51:05.238 [Pipeline] } 00:51:05.253 [Pipeline] // stage 00:51:05.258 [Pipeline] } 00:51:05.274 [Pipeline] // catchError 00:51:05.283 [Pipeline] stage 00:51:05.286 [Pipeline] { (Stop VM) 00:51:05.298 [Pipeline] sh 00:51:05.579 + vagrant halt 00:51:08.864 ==> default: Halting domain... 00:51:15.509 [Pipeline] sh 00:51:15.789 + vagrant destroy -f 00:51:18.320 ==> default: Removing domain... 
00:51:18.900 [Pipeline] sh 00:51:19.179 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:51:19.186 [Pipeline] } 00:51:19.199 [Pipeline] // stage 00:51:19.203 [Pipeline] } 00:51:19.215 [Pipeline] // dir 00:51:19.219 [Pipeline] } 00:51:19.232 [Pipeline] // wrap 00:51:19.237 [Pipeline] } 00:51:19.247 [Pipeline] // catchError 00:51:19.255 [Pipeline] stage 00:51:19.257 [Pipeline] { (Epilogue) 00:51:19.267 [Pipeline] sh 00:51:19.546 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:51:24.824 [Pipeline] catchError 00:51:24.826 [Pipeline] { 00:51:24.839 [Pipeline] sh 00:51:25.121 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:51:25.122 Artifacts sizes are good 00:51:25.130 [Pipeline] } 00:51:25.143 [Pipeline] // catchError 00:51:25.154 [Pipeline] archiveArtifacts 00:51:25.161 Archiving artifacts 00:51:25.284 [Pipeline] cleanWs 00:51:25.297 [WS-CLEANUP] Deleting project workspace... 00:51:25.297 [WS-CLEANUP] Deferred wipeout is used... 00:51:25.303 [WS-CLEANUP] done 00:51:25.305 [Pipeline] } 00:51:25.320 [Pipeline] // stage 00:51:25.325 [Pipeline] } 00:51:25.338 [Pipeline] // node 00:51:25.343 [Pipeline] End of Pipeline 00:51:25.380 Finished: SUCCESS