00:00:00.001 Started by upstream project "autotest-per-patch" build number 131164
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.124 The recommended git tool is: git
00:00:00.125 using credential 00000000-0000-0000-0000-000000000002
00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.198 Fetching changes from the remote Git repository
00:00:00.199 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.260 Using shallow fetch with depth 1
00:00:00.260 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.260 > git --version # timeout=10
00:00:00.304 > git --version # 'git version 2.39.2'
00:00:00.304 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.335 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.828 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.840 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.856 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD)
00:00:06.856 > git config core.sparsecheckout # timeout=10
00:00:06.867 > git read-tree -mu HEAD # timeout=10
00:00:06.883 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5
00:00:06.900 Commit message: "scripts/kid: add issue 3551"
00:00:06.900 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10
00:00:06.994 [Pipeline] Start of Pipeline
00:00:07.013 [Pipeline] library
00:00:07.015 Loading library shm_lib@master
00:00:07.015 Library shm_lib@master is cached. Copying from home.
00:00:07.036 [Pipeline] node
00:00:07.050 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:07.053 [Pipeline] {
00:00:07.063 [Pipeline] catchError
00:00:07.064 [Pipeline] {
00:00:07.074 [Pipeline] wrap
00:00:07.081 [Pipeline] {
00:00:07.087 [Pipeline] stage
00:00:07.088 [Pipeline] { (Prologue)
00:00:07.113 [Pipeline] echo
00:00:07.115 Node: VM-host-WFP1
00:00:07.123 [Pipeline] cleanWs
00:00:07.134 [WS-CLEANUP] Deleting project workspace...
00:00:07.134 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.142 [WS-CLEANUP] done
00:00:07.409 [Pipeline] setCustomBuildProperty
00:00:07.525 [Pipeline] httpRequest
00:00:08.687 [Pipeline] echo
00:00:08.688 Sorcerer 10.211.164.101 is alive
00:00:08.693 [Pipeline] retry
00:00:08.695 [Pipeline] {
00:00:08.703 [Pipeline] httpRequest
00:00:08.707 HttpMethod: GET
00:00:08.707 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:08.708 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:08.723 Response Code: HTTP/1.1 200 OK
00:00:08.723 Success: Status code 200 is in the accepted range: 200,404
00:00:08.724 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:13.901 [Pipeline] }
00:00:13.917 [Pipeline] // retry
00:00:13.925 [Pipeline] sh
00:00:14.210 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:14.227 [Pipeline] httpRequest
00:00:14.635 [Pipeline] echo
00:00:14.637 Sorcerer 10.211.164.101 is alive
00:00:14.647 [Pipeline] retry
00:00:14.649 [Pipeline] {
00:00:14.663 [Pipeline] httpRequest
00:00:14.668 HttpMethod: GET
00:00:14.669 URL: http://10.211.164.101/packages/spdk_1b00262276586b1a98819ff7de01d5125986edf4.tar.gz
00:00:14.669 Sending request to url: http://10.211.164.101/packages/spdk_1b00262276586b1a98819ff7de01d5125986edf4.tar.gz
00:00:14.684 Response Code: HTTP/1.1 200 OK
00:00:14.684 Success: Status code 200 is in the accepted range: 200,404
00:00:14.685 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_1b00262276586b1a98819ff7de01d5125986edf4.tar.gz
00:01:07.591 [Pipeline] }
00:01:07.609 [Pipeline] // retry
00:01:07.616 [Pipeline] sh
00:01:07.905 + tar --no-same-owner -xf spdk_1b00262276586b1a98819ff7de01d5125986edf4.tar.gz
00:01:10.484 [Pipeline] sh
00:01:10.768 + git -C spdk log --oneline -n5
00:01:10.768 1b0026227 bdev: Rename _bdev_memory_domain_io_get_buf() by bdev_io_get_bounce_buf()
00:01:10.768 fb13d4eaf bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:01:10.768 9ffd6d624 bdev: Factor out checking bounce buffer necessity into helper function
00:01:10.768 2308677da bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:01:10.768 f3e357702 bdev: Use data_block_size for upper layer buffer if no_metadata is true
00:01:10.787 [Pipeline] writeFile
00:01:10.802 [Pipeline] sh
00:01:11.086 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:11.097 [Pipeline] sh
00:01:11.380 + cat autorun-spdk.conf
00:01:11.380 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.380 SPDK_TEST_NVME=1
00:01:11.380 SPDK_TEST_FTL=1
00:01:11.380 SPDK_TEST_ISAL=1
00:01:11.380 SPDK_RUN_ASAN=1
00:01:11.380 SPDK_RUN_UBSAN=1
00:01:11.380 SPDK_TEST_XNVME=1
00:01:11.380 SPDK_TEST_NVME_FDP=1
00:01:11.380 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:11.388 RUN_NIGHTLY=0
00:01:11.390 [Pipeline] }
00:01:11.404 [Pipeline] // stage
00:01:11.420 [Pipeline] stage
00:01:11.423 [Pipeline] { (Run VM)
00:01:11.435 [Pipeline] sh
00:01:11.719 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:11.719 + echo 'Start stage prepare_nvme.sh'
00:01:11.719 Start stage prepare_nvme.sh
00:01:11.719 + [[ -n 4 ]]
00:01:11.719 + disk_prefix=ex4
00:01:11.719 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:01:11.719 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:01:11.719 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:01:11.719 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.719 ++ SPDK_TEST_NVME=1
00:01:11.719 ++ SPDK_TEST_FTL=1
00:01:11.719 ++ SPDK_TEST_ISAL=1
00:01:11.719 ++ SPDK_RUN_ASAN=1
00:01:11.719 ++ SPDK_RUN_UBSAN=1
00:01:11.719 ++ SPDK_TEST_XNVME=1
00:01:11.719 ++ SPDK_TEST_NVME_FDP=1
00:01:11.719 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:11.719 ++ RUN_NIGHTLY=0
00:01:11.719 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:01:11.719 + nvme_files=()
00:01:11.719 + declare -A nvme_files
00:01:11.719 + backend_dir=/var/lib/libvirt/images/backends
00:01:11.719 + nvme_files['nvme.img']=5G
00:01:11.719 + nvme_files['nvme-cmb.img']=5G
00:01:11.719 + nvme_files['nvme-multi0.img']=4G
00:01:11.719 + nvme_files['nvme-multi1.img']=4G
00:01:11.719 + nvme_files['nvme-multi2.img']=4G
00:01:11.719 + nvme_files['nvme-openstack.img']=8G
00:01:11.719 + nvme_files['nvme-zns.img']=5G
00:01:11.719 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:11.719 + (( SPDK_TEST_FTL == 1 ))
00:01:11.719 + nvme_files["nvme-ftl.img"]=6G
00:01:11.719 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:11.719 + nvme_files["nvme-fdp.img"]=1G
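The trace above is prepare_nvme.sh declaring a bash associative array that maps each backing-file name to its size, with the FTL and FDP images added only when the corresponding SPDK_TEST_* flag is set; the loop that follows creates one raw image per entry. A minimal sketch of the same pattern, with `truncate` as a stand-in for create_nvme_img.sh (whose internals are not shown in this log):

    #!/usr/bin/env bash
    # Name -> size map, mirroring the nvme_files array in the trace above.
    declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G)
    # Feature-gated entries, added only when the matching test flag is on.
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G

    backend_dir=/var/lib/libvirt/images/backends
    mkdir -p "$backend_dir"
    for name in "${!nvme_files[@]}"; do
        # Sparse raw backing file; the CI job delegates this step to
        # spdk/scripts/vagrant/create_nvme_img.sh rather than truncate.
        truncate -s "${nvme_files[$name]}" "$backend_dir/ex4-$name"
    done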
00:01:11.719 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:11.719 + for nvme in "${!nvme_files[@]}"
00:01:11.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:11.719 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:11.719 + for nvme in "${!nvme_files[@]}"
00:01:11.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:01:11.978 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:11.978 + for nvme in "${!nvme_files[@]}"
00:01:11.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:11.978 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.978 + for nvme in "${!nvme_files[@]}"
00:01:11.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:11.978 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:11.978 + for nvme in "${!nvme_files[@]}"
00:01:11.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:11.978 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.978 + for nvme in "${!nvme_files[@]}"
00:01:11.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:12.238 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.238 + for nvme in "${!nvme_files[@]}"
00:01:12.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:12.498 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.498 + for nvme in "${!nvme_files[@]}"
00:01:12.498 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:01:12.498 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:12.498 + for nvme in "${!nvme_files[@]}"
00:01:12.498 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:12.757 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:12.757 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:12.757 + echo 'End stage prepare_nvme.sh'
00:01:12.757 End stage prepare_nvme.sh
00:01:12.768 [Pipeline] sh
00:01:13.092 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:13.092 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:13.092
00:01:13.092 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:01:13.092 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:01:13.092 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:01:13.092 HELP=0
00:01:13.092 DRY_RUN=0
00:01:13.092 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:01:13.092 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:13.092 NVME_AUTO_CREATE=0
00:01:13.092 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:01:13.092 NVME_CMB=,,,,
00:01:13.092 NVME_PMR=,,,,
00:01:13.092 NVME_ZNS=,,,,
00:01:13.092 NVME_MS=true,,,,
00:01:13.092 NVME_FDP=,,,on,
00:01:13.092 SPDK_VAGRANT_DISTRO=fedora39
00:01:13.092 SPDK_VAGRANT_VMCPU=10
00:01:13.092 SPDK_VAGRANT_VMRAM=12288
00:01:13.092 SPDK_VAGRANT_PROVIDER=libvirt
00:01:13.092 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:13.092 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:13.092 SPDK_OPENSTACK_NETWORK=0
00:01:13.092 VAGRANT_PACKAGE_BOX=0
00:01:13.092 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:13.092 FORCE_DISTRO=true
00:01:13.092 VAGRANT_BOX_VERSION=
00:01:13.092 EXTRA_VAGRANTFILES=
00:01:13.092 NIC_MODEL=e1000
00:01:13.092
00:01:13.092 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:01:13.092 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:01:15.627 Bringing machine 'default' up with 'libvirt' provider...
00:01:17.008 ==> default: Creating image (snapshot of base box volume).
00:01:17.008 ==> default: Creating domain with the following settings...
00:01:17.008 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728966305_5a255d2f0b66638388d3
00:01:17.008 ==> default: -- Domain type: kvm
00:01:17.008 ==> default: -- Cpus: 10
00:01:17.008 ==> default: -- Feature: acpi
00:01:17.008 ==> default: -- Feature: apic
00:01:17.008 ==> default: -- Feature: pae
00:01:17.008 ==> default: -- Memory: 12288M
00:01:17.008 ==> default: -- Memory Backing: hugepages:
00:01:17.008 ==> default: -- Management MAC:
00:01:17.008 ==> default: -- Loader:
00:01:17.008 ==> default: -- Nvram:
00:01:17.008 ==> default: -- Base box: spdk/fedora39
00:01:17.008 ==> default: -- Storage pool: default
00:01:17.008 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728966305_5a255d2f0b66638388d3.img (20G)
00:01:17.008 ==> default: -- Volume Cache: default
00:01:17.008 ==> default: -- Kernel:
00:01:17.008 ==> default: -- Initrd:
00:01:17.008 ==> default: -- Graphics Type: vnc
00:01:17.008 ==> default: -- Graphics Port: -1
00:01:17.008 ==> default: -- Graphics IP: 127.0.0.1
00:01:17.008 ==> default: -- Graphics Password: Not defined
00:01:17.008 ==> default: -- Video Type: cirrus
00:01:17.008 ==> default: -- Video VRAM: 9216
00:01:17.008 ==> default: -- Sound Type:
00:01:17.008 ==> default: -- Keymap: en-us
00:01:17.008 ==> default: -- TPM Path:
00:01:17.008 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:17.008 ==> default: -- Command line args:
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:17.008 ==> default: -> value=-drive,
00:01:17.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:17.008 ==> default: -> value=-drive,
00:01:17.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:17.008 ==> default: -> value=-drive,
00:01:17.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.008 ==> default: -> value=-drive,
00:01:17.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.008 ==> default: -> value=-drive,
00:01:17.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:17.008 ==> default: -> value=-drive,
00:01:17.008 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:17.008 ==> default: -> value=-device,
00:01:17.008 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
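Each backing file is wired into the guest as its own NVMe namespace: a `-drive if=none` names the raw file, `-device nvme` creates a controller, and `-device nvme-ns` binds the drive to that controller; the FDP-capable controller (nvme-3) additionally joins an `nvme-subsys` device created with `fdp=on`. A minimal standalone sketch using the same flags as the argument list above (machine, memory, and display options omitted; path and serial are illustrative):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096
    # For FDP, the subsystem is created first and the controller joins it:
    #   -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8
    #   -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3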
00:01:17.267 ==> default: Creating shared folders metadata...
00:01:17.628 ==> default: Starting domain.
00:01:19.534 ==> default: Waiting for domain to get an IP address...
00:01:37.629 ==> default: Waiting for SSH to become available...
00:01:39.013 ==> default: Configuring and enabling network interfaces...
00:01:45.583 default: SSH address: 192.168.121.152:22
00:01:45.583 default: SSH username: vagrant
00:01:45.583 default: SSH auth method: private key
00:01:47.488 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:55.663 ==> default: Mounting SSHFS shared folder...
00:01:58.198 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:58.198 ==> default: Checking Mount..
00:01:59.575 ==> default: Folder Successfully Mounted!
00:01:59.575 ==> default: Running provisioner: file...
00:02:00.511 default: ~/.gitconfig => .gitconfig
00:02:01.079
00:02:01.079 SUCCESS!
00:02:01.079
00:02:01.079 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:02:01.079 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:01.079 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:02:01.079
00:02:01.088 [Pipeline] }
00:02:01.104 [Pipeline] // stage
00:02:01.113 [Pipeline] dir
00:02:01.114 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:02:01.115 [Pipeline] {
00:02:01.127 [Pipeline] catchError
00:02:01.129 [Pipeline] {
00:02:01.142 [Pipeline] sh
00:02:01.423 + vagrant ssh-config --host vagrant
00:02:01.424 + sed -ne /^Host/,$p
00:02:01.424 + tee ssh_conf
00:02:03.961 Host vagrant
00:02:03.961 HostName 192.168.121.152
00:02:03.961 User vagrant
00:02:03.961 Port 22
00:02:03.961 UserKnownHostsFile /dev/null
00:02:03.961 StrictHostKeyChecking no
00:02:03.961 PasswordAuthentication no
00:02:03.961 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:03.962 IdentitiesOnly yes
00:02:03.962 LogLevel FATAL
00:02:03.962 ForwardAgent yes
00:02:03.962 ForwardX11 yes
00:02:03.962
00:02:03.976 [Pipeline] withEnv
00:02:03.979 [Pipeline] {
00:02:03.995 [Pipeline] sh
00:02:04.290 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:04.290 source /etc/os-release
00:02:04.290 [[ -e /image.version ]] && img=$(< /image.version)
00:02:04.290 # Minimal, systemd-like check.
00:02:04.290 if [[ -e /.dockerenv ]]; then
00:02:04.290 # Clear garbage from the node's name:
00:02:04.290 # agt-er_autotest_547-896 -> autotest_547-896
00:02:04.290 # $HOSTNAME is the actual container id
00:02:04.290 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:04.290 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:04.290 # We can assume this is a mount from a host where container is running,
00:02:04.290 # so fetch its hostname to easily identify the target swarm worker.
00:02:04.290 container="$(< /etc/hostname) ($agent)"
00:02:04.290 else
00:02:04.290 # Fallback
00:02:04.290 container=$agent
00:02:04.290 fi
00:02:04.290 fi
00:02:04.290 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:04.290
00:02:04.561 [Pipeline] }
00:02:04.577 [Pipeline] // withEnv
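The ssh_conf captured above (vagrant ssh-config filtered through sed and saved with tee) is what lets every later step talk to the guest with plain ssh/scp via `-F ssh_conf` instead of the slower `vagrant ssh` wrapper, as the scp and ssh calls that follow show. The pattern in isolation (commands taken from this log):

    # Capture vagrant's generated SSH settings once...
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
    # ...then reuse them for direct ssh/scp against the guest.
    scp -F ssh_conf -r ./autoruner.sh vagrant@vagrant:./
    ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'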
00:02:04.586 [Pipeline] setCustomBuildProperty
00:02:04.601 [Pipeline] stage
00:02:04.603 [Pipeline] { (Tests)
00:02:04.621 [Pipeline] sh
00:02:04.946 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:05.216 [Pipeline] sh
00:02:05.497 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:05.771 [Pipeline] timeout
00:02:05.771 Timeout set to expire in 50 min
00:02:05.773 [Pipeline] {
00:02:05.789 [Pipeline] sh
00:02:06.067 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:06.636 HEAD is now at 1b0026227 bdev: Rename _bdev_memory_domain_io_get_buf() by bdev_io_get_bounce_buf()
00:02:06.648 [Pipeline] sh
00:02:06.930 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:07.210 [Pipeline] sh
00:02:07.495 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:07.769 [Pipeline] sh
00:02:08.047 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:08.305 ++ readlink -f spdk_repo
00:02:08.305 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:08.305 + [[ -n /home/vagrant/spdk_repo ]]
00:02:08.305 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:08.305 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:08.305 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:08.305 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:08.305 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:08.305 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:08.305 + cd /home/vagrant/spdk_repo
00:02:08.305 + source /etc/os-release
00:02:08.305 ++ NAME='Fedora Linux'
00:02:08.305 ++ VERSION='39 (Cloud Edition)'
00:02:08.305 ++ ID=fedora
00:02:08.305 ++ VERSION_ID=39
00:02:08.305 ++ VERSION_CODENAME=
00:02:08.305 ++ PLATFORM_ID=platform:f39
00:02:08.305 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:08.305 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:08.305 ++ LOGO=fedora-logo-icon
00:02:08.305 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:08.305 ++ HOME_URL=https://fedoraproject.org/
00:02:08.305 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:08.305 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:08.305 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:08.305 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:08.305 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:08.305 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:08.305 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:08.305 ++ SUPPORT_END=2024-11-12
00:02:08.305 ++ VARIANT='Cloud Edition'
00:02:08.305 ++ VARIANT_ID=cloud
00:02:08.305 + uname -a
00:02:08.305 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:08.305 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:08.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:09.130 Hugepages
00:02:09.130 node hugesize free / total
00:02:09.130 node0 1048576kB 0 / 0
00:02:09.130 node0 2048kB 0 / 0
00:02:09.130
00:02:09.130 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:09.130 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:09.130 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:09.130 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:09.130 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:09.130 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:09.130 + rm -f /tmp/spdk-ld-path
00:02:09.130 + source autorun-spdk.conf
00:02:09.130 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.130 ++ SPDK_TEST_NVME=1
00:02:09.130 ++ SPDK_TEST_FTL=1
00:02:09.130 ++ SPDK_TEST_ISAL=1
00:02:09.130 ++ SPDK_RUN_ASAN=1
00:02:09.130 ++ SPDK_RUN_UBSAN=1
00:02:09.130 ++ SPDK_TEST_XNVME=1
00:02:09.130 ++ SPDK_TEST_NVME_FDP=1
00:02:09.130 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.130 ++ RUN_NIGHTLY=0
00:02:09.130 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:09.130 + [[ -n '' ]]
00:02:09.130 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:09.130 + for M in /var/spdk/build-*-manifest.txt
00:02:09.130 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:09.130 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.130 + for M in /var/spdk/build-*-manifest.txt
00:02:09.130 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:09.130 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.130 + for M in /var/spdk/build-*-manifest.txt
00:02:09.130 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:09.130 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.389 ++ uname
00:02:09.389 + [[ Linux == \L\i\n\u\x ]]
00:02:09.389 + sudo dmesg -T
00:02:09.389 + sudo dmesg --clear
00:02:09.389 + dmesg_pid=5241
00:02:09.389 + [[ Fedora Linux == FreeBSD ]]
00:02:09.389 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:09.389 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:09.389 + sudo dmesg -Tw
00:02:09.389 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:09.389 + [[ -x /usr/src/fio-static/fio ]]
00:02:09.389 + export FIO_BIN=/usr/src/fio-static/fio
00:02:09.389 + FIO_BIN=/usr/src/fio-static/fio
00:02:09.389 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:09.389 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:09.389 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:09.389 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:09.389 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:09.389 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:09.389 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:09.389 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:09.389 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:09.389 Test configuration:
00:02:09.389 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.389 SPDK_TEST_NVME=1
00:02:09.389 SPDK_TEST_FTL=1
00:02:09.389 SPDK_TEST_ISAL=1
00:02:09.389 SPDK_RUN_ASAN=1
00:02:09.389 SPDK_RUN_UBSAN=1
00:02:09.389 SPDK_TEST_XNVME=1
00:02:09.389 SPDK_TEST_NVME_FDP=1
00:02:09.389 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.389 RUN_NIGHTLY=0
00:02:09.389 04:25:58 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:09.389 04:25:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:09.389 04:25:58 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:09.389 04:25:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:09.389 04:25:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:09.389 04:25:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:09.389 04:25:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.389 04:25:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.389 04:25:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.389 04:25:58 -- paths/export.sh@5 -- $ export PATH
00:02:09.389 04:25:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.389 04:25:58 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:09.389 04:25:58 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:09.389 04:25:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728966358.XXXXXX
00:02:09.389 04:25:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728966358.6QuFnm
00:02:09.389 04:25:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:09.389 04:25:58 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:09.389 04:25:58 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:09.389 04:25:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:09.389 04:25:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:09.389 04:25:58 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:09.389 04:25:58 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:09.389 04:25:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.647 04:25:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:09.648 04:25:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:09.648 04:25:58 -- pm/common@17 -- $ local monitor
00:02:09.648 04:25:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:09.648 04:25:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:09.648 04:25:58 -- pm/common@25 -- $ sleep 1
00:02:09.648 04:25:58 -- pm/common@21 -- $ date +%s
00:02:09.648 04:25:58 -- pm/common@21 -- $ date +%s
00:02:09.648 04:25:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728966358
00:02:09.648 04:25:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728966358
00:02:09.648 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728966358_collect-cpu-load.pm.log
00:02:09.648 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728966358_collect-vmstat.pm.log
00:02:10.587 04:25:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:10.587 04:25:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:10.587 04:25:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:10.587 04:25:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:10.587 04:25:59 -- spdk/autobuild.sh@16 -- $ date -u
00:02:10.587 Tue Oct 15 04:25:59 AM UTC 2024
00:02:10.587 04:25:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:10.587 v25.01-pre-61-g1b0026227
00:02:10.587 04:25:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:10.587 04:25:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:10.587 04:25:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:10.587 04:25:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:10.587 04:25:59 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.587 ************************************
00:02:10.587 START TEST asan
00:02:10.587 ************************************
00:02:10.587 using asan
00:02:10.587 04:25:59 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:10.587
00:02:10.587 real 0m0.000s
00:02:10.587 user 0m0.000s
00:02:10.587 sys 0m0.000s
00:02:10.587 04:25:59 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:10.587 04:25:59 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:10.587 ************************************
00:02:10.587 END TEST asan
00:02:10.587 ************************************
00:02:10.587 04:26:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:10.587 04:26:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:10.587 04:26:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:10.587 04:26:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:10.587 04:26:00 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.587 ************************************
00:02:10.587 START TEST ubsan
00:02:10.587 ************************************
00:02:10.587 using ubsan
00:02:10.587 04:26:00 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:10.587
00:02:10.587 real 0m0.000s
00:02:10.587 user 0m0.000s
00:02:10.587 sys 0m0.000s
00:02:10.587 04:26:00 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:10.587 04:26:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:10.587 ************************************
00:02:10.587 END TEST ubsan
00:02:10.587 ************************************
00:02:10.587 04:26:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:10.587 04:26:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:10.587 04:26:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:10.587 04:26:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:10.587 04:26:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:10.587 04:26:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:10.587 04:26:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:10.587 04:26:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:10.587 04:26:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:10.847 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:10.847 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:11.414 Using 'verbs' RDMA provider
00:02:27.256 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:45.363 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:45.363 Creating mk/config.mk...done.
00:02:45.363 Creating mk/cc.flags.mk...done.
00:02:45.363 Type 'make' to build.
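At this point autobuild has translated the autorun-spdk.conf flags into a configure line (`--enable-asan`/`--enable-ubsan` from SPDK_RUN_ASAN/SPDK_RUN_UBSAN, `--with-xnvme` from SPDK_TEST_XNVME, and so on). A sketch of reproducing this build step by hand, using the exact flags from the trace above and `-j10` to match the VM's 10 vCPUs:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10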
00:02:45.363 04:26:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:45.363 04:26:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:45.363 04:26:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:45.363 04:26:33 -- common/autotest_common.sh@10 -- $ set +x
00:02:45.363 ************************************
00:02:45.363 START TEST make
00:02:45.363 ************************************
00:02:45.363 04:26:33 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:45.363 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:45.363 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:45.363 meson setup builddir \
00:02:45.363 -Dwith-libaio=enabled \
00:02:45.363 -Dwith-liburing=enabled \
00:02:45.363 -Dwith-libvfn=disabled \
00:02:45.363 -Dwith-spdk=false && \
00:02:45.363 meson compile -C builddir && \
00:02:45.363 cd -)
00:02:45.363 make[1]: Nothing to be done for 'all'.
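The make target above shells into the bundled xnvme source and drives a Meson build rather than make. Run by hand, the equivalent would be (options verbatim from the trace; builddir is Meson's out-of-tree build directory):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir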
00:02:46.739 The Meson build system
00:02:46.739 Version: 1.5.0
00:02:46.739 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:46.739 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:46.739 Build type: native build
00:02:46.739 Project name: xnvme
00:02:46.739 Project version: 0.7.3
00:02:46.739 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:46.739 C linker for the host machine: cc ld.bfd 2.40-14
00:02:46.739 Host machine cpu family: x86_64
00:02:46.739 Host machine cpu: x86_64
00:02:46.739 Message: host_machine.system: linux
00:02:46.739 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:46.739 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:46.739 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:46.739 Run-time dependency threads found: YES
00:02:46.739 Has header "setupapi.h" : NO
00:02:46.739 Has header "linux/blkzoned.h" : YES
00:02:46.739 Has header "linux/blkzoned.h" : YES (cached)
00:02:46.739 Has header "libaio.h" : YES
00:02:46.739 Library aio found: YES
00:02:46.739 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:46.739 Run-time dependency liburing found: YES 2.2
00:02:46.739 Dependency libvfn skipped: feature with-libvfn disabled
00:02:46.739 Run-time dependency appleframeworks found: NO (tried framework)
00:02:46.739 Run-time dependency appleframeworks found: NO (tried framework)
00:02:46.739 Configuring xnvme_config.h using configuration
00:02:46.739 Configuring xnvme.spec using configuration
00:02:46.739 Run-time dependency bash-completion found: YES 2.11
00:02:46.739 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:46.739 Program cp found: YES (/usr/bin/cp)
00:02:46.739 Has header "winsock2.h" : NO
00:02:46.739 Has header "dbghelp.h" : NO
00:02:46.739 Library rpcrt4 found: NO
00:02:46.739 Library rt found: YES
00:02:46.739 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:46.739 Found CMake: /usr/bin/cmake (3.27.7)
00:02:46.739 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:46.739 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:46.739 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:46.739 Build targets in project: 32
00:02:46.739
00:02:46.739 xnvme 0.7.3
00:02:46.739
00:02:46.739 User defined options
00:02:46.739 with-libaio : enabled
00:02:46.739 with-liburing: enabled
00:02:46.739 with-libvfn : disabled
00:02:46.739 with-spdk : false
00:02:46.739
00:02:46.739 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:46.998 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:46.998 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:46.998 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:46.998 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:46.998 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:46.998 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:46.998 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:46.998 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:47.256 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:47.256 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:47.256 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:47.256 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:47.256 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:47.256 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:47.256 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:47.256 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:47.256 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:47.256 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:47.256 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:47.256 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:47.256 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:47.256 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:47.256 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:47.256 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:47.256 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:47.256 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:47.256 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:47.256 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:47.256 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:47.516 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:47.516 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:47.516 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:47.516 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:47.516 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:47.516 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:47.516 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:47.516 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:47.516 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:47.516 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:47.516 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:47.516 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:47.516 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:47.516 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:47.516 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:47.516 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:47.516 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:47.516 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:47.516 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:47.516 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:47.516 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:47.516 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:47.516 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:47.516 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:47.516 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:47.516 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:47.516 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:47.516 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:47.516 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:47.516 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:47.516 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:47.516 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:47.778 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:47.778 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:47.778 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:47.778 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:47.778 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:47.778 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:47.778 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:47.778 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:47.778 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:47.778 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:47.778 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:47.778 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:47.778 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:47.778 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:47.778 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:47.778 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:47.778 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:48.036 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:48.036 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:48.036 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:48.036 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:48.036 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:48.036 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:48.036 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:48.036 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:48.036 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:48.036 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:48.036 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:48.036 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:48.036 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:48.036 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:48.036 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:48.036 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:48.036 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:48.036 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:48.036 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:48.036 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:48.036 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:48.036 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:48.294 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:48.294 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:48.294 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:48.294 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:48.294 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:48.294 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:48.294 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:48.294 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:48.294 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:48.294 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:48.294 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:48.294 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:48.294 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:48.294 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:48.294 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:48.294 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:48.294 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:48.294 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:48.294 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:48.294 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:48.294 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:48.294 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:48.294 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:48.294 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:48.294 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:48.294 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:48.294 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:48.294 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:48.294 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:48.294 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:48.295 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:48.295 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:48.552 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:48.552 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:48.552 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:48.552 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:48.552 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:48.552 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:48.552 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:48.552 [139/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:48.552 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:48.552 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:48.552 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:48.552 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:48.552 [144/203] Linking target lib/libxnvme.so
00:02:48.810 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:48.810 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:48.810 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:48.810 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:48.810 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:48.810 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:48.810 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:48.810 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:48.810 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:48.810 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:48.810 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:48.810 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:48.810 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:48.810 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:48.810 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:48.810 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:48.810 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:49.068 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:49.068 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:49.068 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:49.068 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:49.068 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:49.068 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:49.068 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:49.068 [169/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:49.068 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:49.068 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:49.068 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:49.326 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:49.326 [174/203] Linking static target lib/libxnvme.a
tests/xnvme_tests_async_intf 00:02:49.326 [176/203] Linking target tests/xnvme_tests_enum 00:02:49.326 [177/203] Linking target tests/xnvme_tests_xnvme_file 00:02:49.326 [178/203] Linking target tests/xnvme_tests_lblk 00:02:49.326 [179/203] Linking target tests/xnvme_tests_cli 00:02:49.326 [180/203] Linking target tests/xnvme_tests_scc 00:02:49.326 [181/203] Linking target tests/xnvme_tests_ioworker 00:02:49.326 [182/203] Linking target tests/xnvme_tests_buf 00:02:49.326 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:49.326 [184/203] Linking target tests/xnvme_tests_znd_append 00:02:49.326 [185/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:49.326 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:49.326 [187/203] Linking target tests/xnvme_tests_znd_state 00:02:49.326 [188/203] Linking target tools/zoned 00:02:49.326 [189/203] Linking target tests/xnvme_tests_kvs 00:02:49.326 [190/203] Linking target tests/xnvme_tests_map 00:02:49.326 [191/203] Linking target tools/xdd 00:02:49.326 [192/203] Linking target tools/lblk 00:02:49.326 [193/203] Linking target examples/xnvme_enum 00:02:49.326 [194/203] Linking target tools/xnvme 00:02:49.326 [195/203] Linking target tools/xnvme_file 00:02:49.326 [196/203] Linking target examples/xnvme_dev 00:02:49.326 [197/203] Linking target tools/kvs 00:02:49.326 [198/203] Linking target examples/xnvme_hello 00:02:49.326 [199/203] Linking target examples/xnvme_single_async 00:02:49.326 [200/203] Linking target examples/zoned_io_sync 00:02:49.326 [201/203] Linking target examples/xnvme_io_async 00:02:49.326 [202/203] Linking target examples/zoned_io_async 00:02:49.326 [203/203] Linking target examples/xnvme_single_sync 00:02:49.326 INFO: autodetecting backend as ninja 00:02:49.326 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:49.583 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:56.140 The Meson build system 00:02:56.140 Version: 1.5.0 00:02:56.140 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:56.140 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:56.140 Build type: native build 00:02:56.140 Program cat found: YES (/usr/bin/cat) 00:02:56.140 Project name: DPDK 00:02:56.140 Project version: 24.03.0 00:02:56.140 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.140 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.140 Host machine cpu family: x86_64 00:02:56.140 Host machine cpu: x86_64 00:02:56.140 Message: ## Building in Developer Mode ## 00:02:56.140 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:56.140 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:56.140 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:56.140 Program python3 found: YES (/usr/bin/python3) 00:02:56.140 Program cat found: YES (/usr/bin/cat) 00:02:56.140 Compiler for C supports arguments -march=native: YES 00:02:56.140 Checking for size of "void *" : 8 00:02:56.140 Checking for size of "void *" : 8 (cached) 00:02:56.140 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:56.140 Library m found: YES 00:02:56.140 Library numa found: YES 00:02:56.140 Has header "numaif.h" : YES 00:02:56.140 Library fdt found: NO 00:02:56.140 Library execinfo found: NO 00:02:56.140 Has header "execinfo.h" : YES 00:02:56.140 Found pkg-config: YES (/usr/bin/pkg-config) 
1.9.5 00:02:56.140 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:56.140 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:56.140 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:56.140 Run-time dependency openssl found: YES 3.1.1 00:02:56.140 Run-time dependency libpcap found: YES 1.10.4 00:02:56.140 Has header "pcap.h" with dependency libpcap: YES 00:02:56.140 Compiler for C supports arguments -Wcast-qual: YES 00:02:56.140 Compiler for C supports arguments -Wdeprecated: YES 00:02:56.140 Compiler for C supports arguments -Wformat: YES 00:02:56.140 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:56.140 Compiler for C supports arguments -Wformat-security: NO 00:02:56.140 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.140 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:56.140 Compiler for C supports arguments -Wnested-externs: YES 00:02:56.140 Compiler for C supports arguments -Wold-style-definition: YES 00:02:56.140 Compiler for C supports arguments -Wpointer-arith: YES 00:02:56.140 Compiler for C supports arguments -Wsign-compare: YES 00:02:56.140 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:56.140 Compiler for C supports arguments -Wundef: YES 00:02:56.140 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.140 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:56.140 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:56.140 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.140 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:56.140 Program objdump found: YES (/usr/bin/objdump) 00:02:56.140 Compiler for C supports arguments -mavx512f: YES 00:02:56.140 Checking if "AVX512 checking" compiles: YES 00:02:56.140 Fetching value of define "__SSE4_2__" : 1 00:02:56.140 Fetching value of define "__AES__" : 1 00:02:56.140 Fetching value of define "__AVX__" : 1 00:02:56.140 Fetching value of define "__AVX2__" : 1 00:02:56.140 Fetching value of define "__AVX512BW__" : 1 00:02:56.140 Fetching value of define "__AVX512CD__" : 1 00:02:56.140 Fetching value of define "__AVX512DQ__" : 1 00:02:56.140 Fetching value of define "__AVX512F__" : 1 00:02:56.140 Fetching value of define "__AVX512VL__" : 1 00:02:56.140 Fetching value of define "__PCLMUL__" : 1 00:02:56.140 Fetching value of define "__RDRND__" : 1 00:02:56.140 Fetching value of define "__RDSEED__" : 1 00:02:56.140 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:56.140 Fetching value of define "__znver1__" : (undefined) 00:02:56.140 Fetching value of define "__znver2__" : (undefined) 00:02:56.140 Fetching value of define "__znver3__" : (undefined) 00:02:56.140 Fetching value of define "__znver4__" : (undefined) 00:02:56.140 Library asan found: YES 00:02:56.140 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:56.140 Message: lib/log: Defining dependency "log" 00:02:56.140 Message: lib/kvargs: Defining dependency "kvargs" 00:02:56.140 Message: lib/telemetry: Defining dependency "telemetry" 00:02:56.140 Library rt found: YES 00:02:56.140 Checking for function "getentropy" : NO 00:02:56.140 Message: lib/eal: Defining dependency "eal" 00:02:56.140 Message: lib/ring: Defining dependency "ring" 00:02:56.140 Message: lib/rcu: Defining dependency "rcu" 00:02:56.140 Message: lib/mempool: Defining dependency "mempool" 00:02:56.140 Message: lib/mbuf: Defining dependency "mbuf" 00:02:56.140 Fetching value of 
define "__PCLMUL__" : 1 (cached) 00:02:56.140 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:56.140 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:56.140 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:56.140 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:56.140 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:56.140 Compiler for C supports arguments -mpclmul: YES 00:02:56.140 Compiler for C supports arguments -maes: YES 00:02:56.140 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:56.140 Compiler for C supports arguments -mavx512bw: YES 00:02:56.140 Compiler for C supports arguments -mavx512dq: YES 00:02:56.140 Compiler for C supports arguments -mavx512vl: YES 00:02:56.140 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:56.140 Compiler for C supports arguments -mavx2: YES 00:02:56.140 Compiler for C supports arguments -mavx: YES 00:02:56.140 Message: lib/net: Defining dependency "net" 00:02:56.140 Message: lib/meter: Defining dependency "meter" 00:02:56.140 Message: lib/ethdev: Defining dependency "ethdev" 00:02:56.140 Message: lib/pci: Defining dependency "pci" 00:02:56.140 Message: lib/cmdline: Defining dependency "cmdline" 00:02:56.140 Message: lib/hash: Defining dependency "hash" 00:02:56.140 Message: lib/timer: Defining dependency "timer" 00:02:56.140 Message: lib/compressdev: Defining dependency "compressdev" 00:02:56.140 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:56.140 Message: lib/dmadev: Defining dependency "dmadev" 00:02:56.140 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:56.140 Message: lib/power: Defining dependency "power" 00:02:56.140 Message: lib/reorder: Defining dependency "reorder" 00:02:56.140 Message: lib/security: Defining dependency "security" 00:02:56.140 Has header "linux/userfaultfd.h" : YES 00:02:56.140 Has header "linux/vduse.h" : YES 00:02:56.140 Message: lib/vhost: Defining dependency "vhost" 00:02:56.140 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:56.140 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:56.140 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:56.140 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:56.140 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:56.140 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:56.140 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:56.140 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:56.140 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:56.140 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:56.140 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:56.140 Configuring doxy-api-html.conf using configuration 00:02:56.140 Configuring doxy-api-man.conf using configuration 00:02:56.140 Program mandb found: YES (/usr/bin/mandb) 00:02:56.140 Program sphinx-build found: NO 00:02:56.140 Configuring rte_build_config.h using configuration 00:02:56.140 Message: 00:02:56.140 ================= 00:02:56.140 Applications Enabled 00:02:56.140 ================= 00:02:56.140 00:02:56.140 apps: 00:02:56.140 00:02:56.140 00:02:56.140 Message: 00:02:56.140 ================= 00:02:56.140 Libraries Enabled 00:02:56.140 ================= 00:02:56.140 00:02:56.140 libs: 00:02:56.140 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:56.140 Message: lib/net: Defining dependency "net"
00:02:56.140 Message: lib/meter: Defining dependency "meter"
00:02:56.140 Message: lib/ethdev: Defining dependency "ethdev"
00:02:56.140 Message: lib/pci: Defining dependency "pci"
00:02:56.140 Message: lib/cmdline: Defining dependency "cmdline"
00:02:56.140 Message: lib/hash: Defining dependency "hash"
00:02:56.140 Message: lib/timer: Defining dependency "timer"
00:02:56.140 Message: lib/compressdev: Defining dependency "compressdev"
00:02:56.140 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:56.140 Message: lib/dmadev: Defining dependency "dmadev"
00:02:56.140 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:56.140 Message: lib/power: Defining dependency "power"
00:02:56.140 Message: lib/reorder: Defining dependency "reorder"
00:02:56.140 Message: lib/security: Defining dependency "security"
00:02:56.140 Has header "linux/userfaultfd.h" : YES
00:02:56.140 Has header "linux/vduse.h" : YES
00:02:56.140 Message: lib/vhost: Defining dependency "vhost"
00:02:56.140 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:56.140 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:56.140 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:56.140 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:56.140 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:56.140 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:56.140 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:56.140 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:56.140 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:56.140 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:56.140 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:56.140 Configuring doxy-api-html.conf using configuration
00:02:56.140 Configuring doxy-api-man.conf using configuration
00:02:56.140 Program mandb found: YES (/usr/bin/mandb)
00:02:56.140 Program sphinx-build found: NO
00:02:56.140 Configuring rte_build_config.h using configuration
00:02:56.140 Message:
00:02:56.140 =================
00:02:56.140 Applications Enabled
00:02:56.140 =================
00:02:56.140
00:02:56.140 apps:
00:02:56.140
00:02:56.140
00:02:56.140 Message:
00:02:56.140 =================
00:02:56.140 Libraries Enabled
00:02:56.140 =================
00:02:56.140
00:02:56.140 libs:
00:02:56.140 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:56.140 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:56.140 cryptodev, dmadev, power, reorder, security, vhost,
00:02:56.140
00:02:56.140 Message:
00:02:56.140 ===============
00:02:56.140 Drivers Enabled
00:02:56.140 ===============
00:02:56.140
00:02:56.140 common:
00:02:56.140
00:02:56.140 bus:
00:02:56.140 pci, vdev,
00:02:56.140 mempool:
00:02:56.140 ring,
00:02:56.140 dma:
00:02:56.140
00:02:56.140 net:
00:02:56.140
00:02:56.140 crypto:
00:02:56.140
00:02:56.140 compress:
00:02:56.140
00:02:56.140 vdpa:
00:02:56.140
00:02:56.140
00:02:56.140 Message:
00:02:56.140 =================
00:02:56.140 Content Skipped
00:02:56.140 =================
00:02:56.140
00:02:56.140 apps:
00:02:56.140 dumpcap: explicitly disabled via build config
00:02:56.140 graph: explicitly disabled via build config
00:02:56.140 pdump: explicitly disabled via build config
00:02:56.140 proc-info: explicitly disabled via build config
00:02:56.140 test-acl: explicitly disabled via build config
00:02:56.140 test-bbdev: explicitly disabled via build config
00:02:56.140 test-cmdline: explicitly disabled via build config
00:02:56.140 test-compress-perf: explicitly disabled via build config
00:02:56.141 test-crypto-perf: explicitly disabled via build config
00:02:56.141 test-dma-perf: explicitly disabled via build config
00:02:56.141 test-eventdev: explicitly disabled via build config
00:02:56.141 test-fib: explicitly disabled via build config
00:02:56.141 test-flow-perf: explicitly disabled via build config
00:02:56.141 test-gpudev: explicitly disabled via build config
00:02:56.141 test-mldev: explicitly disabled via build config
00:02:56.141 test-pipeline: explicitly disabled via build config
00:02:56.141 test-pmd: explicitly disabled via build config
00:02:56.141 test-regex: explicitly disabled via build config
00:02:56.141 test-sad: explicitly disabled via build config
00:02:56.141 test-security-perf: explicitly disabled via build config
00:02:56.141
00:02:56.141 libs:
00:02:56.141 argparse: explicitly disabled via build config
00:02:56.141 metrics: explicitly disabled via build config
00:02:56.141 acl: explicitly disabled via build config
00:02:56.141 bbdev: explicitly disabled via build config
00:02:56.141 bitratestats: explicitly disabled via build config
00:02:56.141 bpf: explicitly disabled via build config
00:02:56.141 cfgfile: explicitly disabled via build config
00:02:56.141 distributor: explicitly disabled via build config
00:02:56.141 efd: explicitly disabled via build config
00:02:56.141 eventdev: explicitly disabled via build config
00:02:56.141 dispatcher: explicitly disabled via build config
00:02:56.141 gpudev: explicitly disabled via build config
00:02:56.141 gro: explicitly disabled via build config
00:02:56.141 gso: explicitly disabled via build config
00:02:56.141 ip_frag: explicitly disabled via build config
00:02:56.141 jobstats: explicitly disabled via build config
00:02:56.141 latencystats: explicitly disabled via build config
00:02:56.141 lpm: explicitly disabled via build config
00:02:56.141 member: explicitly disabled via build config
00:02:56.141 pcapng: explicitly disabled via build config
00:02:56.141 rawdev: explicitly disabled via build config
00:02:56.141 regexdev: explicitly disabled via build config
00:02:56.141 mldev: explicitly disabled via build config
00:02:56.141 rib: explicitly disabled via build config
00:02:56.141 sched: explicitly disabled via build config
00:02:56.141 stack: explicitly disabled via build config
00:02:56.141 ipsec: explicitly disabled via build config
00:02:56.141 pdcp: explicitly disabled via build config
00:02:56.141 fib: explicitly disabled via build config
00:02:56.141 port: explicitly disabled via build config
00:02:56.141 pdump: explicitly disabled via build config
00:02:56.141 table: explicitly disabled via build config
00:02:56.141 pipeline: explicitly disabled via build config
00:02:56.141 graph: explicitly disabled via build config
00:02:56.141 node: explicitly disabled via build config
00:02:56.141
00:02:56.141 drivers:
00:02:56.141 common/cpt: not in enabled drivers build config
00:02:56.141 common/dpaax: not in enabled drivers build config
00:02:56.141 common/iavf: not in enabled drivers build config
00:02:56.141 common/idpf: not in enabled drivers build config
00:02:56.141 common/ionic: not in enabled drivers build config
00:02:56.141 common/mvep: not in enabled drivers build config
00:02:56.141 common/octeontx: not in enabled drivers build config
00:02:56.141 bus/auxiliary: not in enabled drivers build config
00:02:56.141 bus/cdx: not in enabled drivers build config
00:02:56.141 bus/dpaa: not in enabled drivers build config
00:02:56.141 bus/fslmc: not in enabled drivers build config
00:02:56.141 bus/ifpga: not in enabled drivers build config
00:02:56.141 bus/platform: not in enabled drivers build config
00:02:56.141 bus/uacce: not in enabled drivers build config
00:02:56.141 bus/vmbus: not in enabled drivers build config
00:02:56.141 common/cnxk: not in enabled drivers build config
00:02:56.141 common/mlx5: not in enabled drivers build config
00:02:56.141 common/nfp: not in enabled drivers build config
00:02:56.141 common/nitrox: not in enabled drivers build config
00:02:56.141 common/qat: not in enabled drivers build config
00:02:56.141 common/sfc_efx: not in enabled drivers build config
00:02:56.141 mempool/bucket: not in enabled drivers build config
00:02:56.141 mempool/cnxk: not in enabled drivers build config
00:02:56.141 mempool/dpaa: not in enabled drivers build config
00:02:56.141 mempool/dpaa2: not in enabled drivers build config
00:02:56.141 mempool/octeontx: not in enabled drivers build config
00:02:56.141 mempool/stack: not in enabled drivers build config
00:02:56.141 dma/cnxk: not in enabled drivers build config
00:02:56.141 dma/dpaa: not in enabled drivers build config
00:02:56.141 dma/dpaa2: not in enabled drivers build config
00:02:56.141 dma/hisilicon: not in enabled drivers build config
00:02:56.141 dma/idxd: not in enabled drivers build config
00:02:56.141 dma/ioat: not in enabled drivers build config
00:02:56.141 dma/skeleton: not in enabled drivers build config
00:02:56.141 net/af_packet: not in enabled drivers build config
00:02:56.141 net/af_xdp: not in enabled drivers build config
00:02:56.141 net/ark: not in enabled drivers build config
00:02:56.141 net/atlantic: not in enabled drivers build config
00:02:56.141 net/avp: not in enabled drivers build config
00:02:56.141 net/axgbe: not in enabled drivers build config
00:02:56.141 net/bnx2x: not in enabled drivers build config
00:02:56.141 net/bnxt: not in enabled drivers build config
00:02:56.141 net/bonding: not in enabled drivers build config
00:02:56.141 net/cnxk: not in enabled drivers build config
00:02:56.141 net/cpfl: not in enabled drivers build config
00:02:56.141 net/cxgbe: not in enabled drivers build config
00:02:56.141 net/dpaa: not in enabled drivers build config
00:02:56.141 net/dpaa2: not in enabled drivers build config
00:02:56.141 net/e1000: not in enabled drivers build config
00:02:56.141 net/ena: not in enabled drivers build config
00:02:56.141 net/enetc: not in enabled drivers build config
00:02:56.141 net/enetfec: not in enabled drivers build config
00:02:56.141 net/enic: not in enabled drivers build config
00:02:56.141 net/failsafe: not in enabled drivers build config
00:02:56.141 net/fm10k: not in enabled drivers build config
00:02:56.141 net/gve: not in enabled drivers build config
00:02:56.141 net/hinic: not in enabled drivers build config
00:02:56.141 net/hns3: not in enabled drivers build config
00:02:56.141 net/i40e: not in enabled drivers build config
00:02:56.141 net/iavf: not in enabled drivers build config
00:02:56.141 net/ice: not in enabled drivers build config
00:02:56.141 net/idpf: not in enabled drivers build config
00:02:56.141 net/igc: not in enabled drivers build config
00:02:56.141 net/ionic: not in enabled drivers build config
00:02:56.141 net/ipn3ke: not in enabled drivers build config
00:02:56.141 net/ixgbe: not in enabled drivers build config
00:02:56.141 net/mana: not in enabled drivers build config
00:02:56.141 net/memif: not in enabled drivers build config
00:02:56.141 net/mlx4: not in enabled drivers build config
00:02:56.141 net/mlx5: not in enabled drivers build config
00:02:56.141 net/mvneta: not in enabled drivers build config
00:02:56.141 net/mvpp2: not in enabled drivers build config
00:02:56.141 net/netvsc: not in enabled drivers build config
00:02:56.141 net/nfb: not in enabled drivers build config
00:02:56.141 net/nfp: not in enabled drivers build config
00:02:56.141 net/ngbe: not in enabled drivers build config
00:02:56.141 net/null: not in enabled drivers build config
00:02:56.141 net/octeontx: not in enabled drivers build config
00:02:56.141 net/octeon_ep: not in enabled drivers build config
00:02:56.141 net/pcap: not in enabled drivers build config
00:02:56.141 net/pfe: not in enabled drivers build config
00:02:56.141 net/qede: not in enabled drivers build config
00:02:56.141 net/ring: not in enabled drivers build config
00:02:56.141 net/sfc: not in enabled drivers build config
00:02:56.141 net/softnic: not in enabled drivers build config
00:02:56.141 net/tap: not in enabled drivers build config
00:02:56.141 net/thunderx: not in enabled drivers build config
00:02:56.141 net/txgbe: not in enabled drivers build config
00:02:56.141 net/vdev_netvsc: not in enabled drivers build config
00:02:56.141 net/vhost: not in enabled drivers build config
00:02:56.141 net/virtio: not in enabled drivers build config
00:02:56.141 net/vmxnet3: not in enabled drivers build config
00:02:56.141 raw/*: missing internal dependency, "rawdev"
00:02:56.141 crypto/armv8: not in enabled drivers build config
00:02:56.141 crypto/bcmfs: not in enabled drivers build config
00:02:56.141 crypto/caam_jr: not in enabled drivers build config
00:02:56.141 crypto/ccp: not in enabled drivers build config
00:02:56.141 crypto/cnxk: not in enabled drivers build config
00:02:56.141 crypto/dpaa_sec: not in enabled drivers build config
00:02:56.141 crypto/dpaa2_sec: not in enabled drivers build config
00:02:56.141 crypto/ipsec_mb: not in enabled drivers build config
00:02:56.141 crypto/mlx5: not in enabled drivers build config
00:02:56.141 crypto/mvsam: not in enabled drivers build config
00:02:56.141 crypto/nitrox: not in enabled drivers build config
00:02:56.141 crypto/null: not in enabled drivers build config
00:02:56.141 crypto/octeontx: not in enabled drivers build config
00:02:56.141 crypto/openssl: not in enabled drivers build config
00:02:56.141 crypto/scheduler: not in enabled drivers build config
00:02:56.141 crypto/uadk: not in enabled drivers build config
00:02:56.141 crypto/virtio: not in enabled drivers build config
00:02:56.141 compress/isal: not in enabled drivers build config
00:02:56.141 compress/mlx5: not in enabled drivers build config
00:02:56.141 compress/nitrox: not in enabled drivers build config
00:02:56.141 compress/octeontx: not in enabled drivers build config
00:02:56.141 compress/zlib: not in enabled drivers build config
00:02:56.141 regex/*: missing internal dependency, "regexdev"
00:02:56.141 ml/*: missing internal dependency, "mldev"
00:02:56.141 vdpa/ifc: not in enabled drivers build config
00:02:56.141 vdpa/mlx5: not in enabled drivers build config
00:02:56.141 vdpa/nfp: not in enabled drivers build config
00:02:56.141 vdpa/sfc: not in enabled drivers build config
00:02:56.141 event/*: missing internal dependency, "eventdev"
00:02:56.141 baseband/*: missing internal dependency, "bbdev"
00:02:56.141 gpu/*: missing internal dependency, "gpudev"
00:02:56.141
00:02:56.141
00:02:56.400 Build targets in project: 85
00:02:56.400
00:02:56.400 DPDK 24.03.0
00:02:56.400
00:02:56.400 User defined options
00:02:56.400 buildtype : debug
00:02:56.400 default_library : shared
00:02:56.400 libdir : lib
00:02:56.400 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:56.400 b_sanitize : address
00:02:56.400 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:56.400 c_link_args :
00:02:56.400 cpu_instruction_set: native
00:02:56.400 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:56.400 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:56.400 enable_docs : false
00:02:56.400 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:56.400 enable_kmods : false
00:02:56.400 max_lcores : 128
00:02:56.400 tests : false
00:02:56.400
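The "User defined options" summary above is the complete Meson configuration state for this DPDK build; the log never echoes the configure command itself, but an invocation of roughly this shape would reproduce it (a sketch reconstructed from the summary, with the long disable lists elided here because they are printed in full above):

  $ meson setup build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
      -Denable_docs=false -Denable_kmods=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_apps=dumpcap,graph,... -Ddisable_libs=acl,argparse,...   # full lists as printed above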
00:02:56.400 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:56.965 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:56.965 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:57.223 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:57.223 [3/268] Linking static target lib/librte_kvargs.a
00:02:57.223 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:57.223 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:57.223 [6/268] Linking static target lib/librte_log.a
00:02:57.481 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:57.481 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.481 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:57.481 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:57.738 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:57.738 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:57.738 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:57.738 [14/268]
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:57.738 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:57.738 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:57.738 [17/268] Linking static target lib/librte_telemetry.a 00:02:57.738 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:57.996 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:57.996 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.254 [21/268] Linking target lib/librte_log.so.24.1 00:02:58.254 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:58.254 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:58.254 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:58.254 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:58.254 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:58.254 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:58.254 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:58.511 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:58.511 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:58.511 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:58.511 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:58.511 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.511 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:58.770 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:58.770 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.770 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.770 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:58.770 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.770 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:59.028 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:59.028 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:59.028 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:59.028 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:59.028 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:59.028 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:59.287 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:59.287 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:59.287 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:59.546 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:59.546 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:59.546 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:59.546 [53/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:59.546 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:59.546 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:59.546 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:59.546 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:59.546 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:59.803 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:59.803 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:59.803 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:59.803 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:00.061 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:00.061 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:00.061 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:00.061 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:00.061 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.322 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:00.322 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:00.322 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:00.322 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:00.322 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.583 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:00.583 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:00.583 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:00.583 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:00.583 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:00.583 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:00.583 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:00.843 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:00.843 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:00.843 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:01.102 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:01.102 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:01.102 [85/268] Linking static target lib/librte_ring.a 00:03:01.102 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:01.102 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:01.102 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:01.102 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.102 [90/268] Linking static target lib/librte_eal.a 00:03:01.102 [91/268] Linking static target lib/librte_rcu.a 00:03:01.102 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:01.102 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:01.361 [94/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:01.361 [95/268] Linking static target lib/librte_mempool.a 00:03:01.361 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:01.619 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.619 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.619 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:01.619 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:01.619 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:01.619 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:01.877 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:01.877 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:02.136 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:02.136 [106/268] Linking static target lib/librte_mbuf.a 00:03:02.136 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.136 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:02.136 [109/268] Linking static target lib/librte_meter.a 00:03:02.136 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:02.136 [111/268] Linking static target lib/librte_net.a 00:03:02.136 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.395 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.395 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:02.395 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.653 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.653 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:02.653 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.912 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.912 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.912 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:03.172 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.172 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.172 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.172 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.431 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.431 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.431 [128/268] Linking static target lib/librte_pci.a 00:03:03.431 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:03.431 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.431 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.431 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.431 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.431 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.431 [135/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.689 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:03.689 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.689 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.689 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.689 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.689 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.689 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.689 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.689 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.689 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:03.977 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.977 [147/268] Linking static target lib/librte_cmdline.a 00:03:03.977 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:04.235 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.235 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.235 [151/268] Linking static target lib/librte_timer.a 00:03:04.235 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:04.235 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.494 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:04.752 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.752 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.752 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.752 [158/268] Linking static target lib/librte_ethdev.a 00:03:04.752 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.752 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.752 [161/268] Linking static target lib/librte_compressdev.a 00:03:05.011 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.011 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:05.269 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:05.269 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.269 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.270 [167/268] Linking static target lib/librte_hash.a 00:03:05.270 [168/268] Linking static target lib/librte_dmadev.a 00:03:05.270 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:05.270 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:05.526 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:05.526 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:05.784 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.784 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 
00:03:05.784 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.784 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:06.043 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:06.043 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:06.043 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:06.043 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:06.043 [181/268] Linking static target lib/librte_cryptodev.a 00:03:06.043 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.301 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:06.301 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:06.301 [185/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.301 [186/268] Linking static target lib/librte_power.a 00:03:06.559 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:06.559 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:06.559 [189/268] Linking static target lib/librte_reorder.a 00:03:06.818 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.818 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.818 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:06.818 [193/268] Linking static target lib/librte_security.a 00:03:07.384 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.384 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:07.643 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.643 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:07.643 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.643 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:07.901 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:07.901 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:08.160 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.160 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:08.160 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.419 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.419 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:08.419 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:08.419 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:08.678 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.678 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:08.678 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:08.678 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:08.937 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:03:08.937 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:08.937 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:08.937 [216/268] Linking static target drivers/librte_bus_pci.a
00:03:08.937 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:08.937 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:08.937 [219/268] Linking static target drivers/librte_bus_vdev.a
00:03:08.937 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:08.937 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:09.196 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:09.196 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.196 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:09.196 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:09.196 [226/268] Linking static target drivers/librte_mempool_ring.a
00:03:09.454 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.713 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:13.905 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:13.905 [230/268] Linking static target lib/librte_vhost.a
00:03:13.905 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.905 [232/268] Linking target lib/librte_eal.so.24.1
00:03:13.905 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:13.905 [234/268] Linking target lib/librte_meter.so.24.1
00:03:13.905 [235/268] Linking target lib/librte_timer.so.24.1
00:03:13.905 [236/268] Linking target lib/librte_ring.so.24.1
00:03:13.905 [237/268] Linking target drivers/librte_bus_vdev.so.24.1
00:03:13.905 [238/268] Linking target lib/librte_pci.so.24.1
00:03:13.905 [239/268] Linking target lib/librte_dmadev.so.24.1
00:03:13.905 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:13.905 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:13.905 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:13.905 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:13.905 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:13.905 [245/268] Linking target lib/librte_rcu.so.24.1
00:03:13.905 [246/268] Linking target lib/librte_mempool.so.24.1
00:03:13.905 [247/268] Linking target drivers/librte_bus_pci.so.24.1
00:03:13.905 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.165 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:14.165 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:14.165 [251/268] Linking target drivers/librte_mempool_ring.so.24.1
00:03:14.165 [252/268] Linking target lib/librte_mbuf.so.24.1
00:03:14.423 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:14.423 [254/268] Linking target lib/librte_cryptodev.so.24.1
00:03:14.423 [255/268] Linking target lib/librte_compressdev.so.24.1
00:03:14.423 [256/268] Linking target lib/librte_net.so.24.1
00:03:14.423 [257/268] Linking target lib/librte_reorder.so.24.1
00:03:14.423 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:14.423 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:14.423 [260/268] Linking target lib/librte_cmdline.so.24.1
00:03:14.681 [261/268] Linking target lib/librte_security.so.24.1
00:03:14.682 [262/268] Linking target lib/librte_hash.so.24.1
00:03:14.682 [263/268] Linking target lib/librte_ethdev.so.24.1
00:03:14.682 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:14.682 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:14.682 [266/268] Linking target lib/librte_power.so.24.1
00:03:15.247 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.562 [268/268] Linking target lib/librte_vhost.so.24.1
00:03:15.562 INFO: autodetecting backend as ninja
00:03:15.562 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
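At this point the DPDK subproject is done and the log switches to SPDK's own build. The CC/CXX, LIB, SO, and SYMLINK tags below are SPDK's quiet make output: object compiles, static library archiving, shared-object links, and the version-suffix symlinks for the .so files. Driving this stage by hand would look roughly like the sketch below; the configure flags are assumptions chosen to match this job's ASan/UBSan and xnvme settings, not a command echoed anywhere in the log:

  $ cd /home/vagrant/spdk_repo/spdk
  $ ./configure --enable-asan --enable-ubsan --with-xnvme --with-dpdk=dpdk/build   # flags assumed
  $ make -j10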
00:03:30.473 CC lib/ut_mock/mock.o
00:03:30.473 CC lib/log/log.o
00:03:30.473 CC lib/log/log_deprecated.o
00:03:30.473 CC lib/log/log_flags.o
00:03:30.473 CC lib/ut/ut.o
00:03:30.473 LIB libspdk_ut_mock.a
00:03:30.473 LIB libspdk_log.a
00:03:30.473 SO libspdk_ut_mock.so.6.0
00:03:30.473 SO libspdk_log.so.7.0
00:03:30.473 LIB libspdk_ut.a
00:03:30.473 SYMLINK libspdk_ut_mock.so
00:03:30.473 SO libspdk_ut.so.2.0
00:03:30.473 SYMLINK libspdk_log.so
00:03:30.473 SYMLINK libspdk_ut.so
00:03:30.473 CC lib/dma/dma.o
00:03:30.473 CXX lib/trace_parser/trace.o
00:03:30.473 CC lib/util/base64.o
00:03:30.473 CC lib/util/bit_array.o
00:03:30.473 CC lib/util/crc16.o
00:03:30.473 CC lib/util/cpuset.o
00:03:30.473 CC lib/util/crc32c.o
00:03:30.473 CC lib/util/crc32.o
00:03:30.473 CC lib/ioat/ioat.o
00:03:30.473 CC lib/vfio_user/host/vfio_user_pci.o
00:03:30.731 CC lib/util/crc32_ieee.o
00:03:30.731 CC lib/util/crc64.o
00:03:30.731 CC lib/vfio_user/host/vfio_user.o
00:03:30.731 CC lib/util/dif.o
00:03:30.731 LIB libspdk_dma.a
00:03:30.731 CC lib/util/fd.o
00:03:30.731 SO libspdk_dma.so.5.0
00:03:30.731 CC lib/util/fd_group.o
00:03:30.731 CC lib/util/file.o
00:03:30.731 CC lib/util/hexlify.o
00:03:30.731 SYMLINK libspdk_dma.so
00:03:30.731 CC lib/util/iov.o
00:03:30.731 LIB libspdk_ioat.a
00:03:30.731 SO libspdk_ioat.so.7.0
00:03:30.731 CC lib/util/math.o
00:03:30.731 CC lib/util/net.o
00:03:30.731 SYMLINK libspdk_ioat.so
00:03:30.731 CC lib/util/pipe.o
00:03:30.991 CC lib/util/strerror_tls.o
00:03:30.991 LIB libspdk_vfio_user.a
00:03:30.991 CC lib/util/string.o
00:03:30.991 SO libspdk_vfio_user.so.5.0
00:03:30.991 CC lib/util/uuid.o
00:03:30.991 CC lib/util/xor.o
00:03:30.991 CC lib/util/zipf.o
00:03:30.991 SYMLINK libspdk_vfio_user.so
00:03:30.991 CC lib/util/md5.o
00:03:31.249 LIB libspdk_util.a
00:03:31.508 LIB libspdk_trace_parser.a
00:03:31.508 SO libspdk_util.so.10.0
00:03:31.508 SO libspdk_trace_parser.so.6.0
00:03:31.508 SYMLINK libspdk_trace_parser.so
00:03:31.508 SYMLINK libspdk_util.so
00:03:31.767 CC lib/rdma_provider/common.o
00:03:31.767 CC lib/rdma_utils/rdma_utils.o
00:03:31.767 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:31.767 CC lib/json/json_util.o
00:03:31.767 CC
lib/json/json_parse.o 00:03:31.767 CC lib/env_dpdk/env.o 00:03:31.767 CC lib/json/json_write.o 00:03:31.767 CC lib/idxd/idxd.o 00:03:31.767 CC lib/vmd/vmd.o 00:03:31.767 CC lib/conf/conf.o 00:03:32.026 CC lib/idxd/idxd_user.o 00:03:32.026 LIB libspdk_rdma_provider.a 00:03:32.026 SO libspdk_rdma_provider.so.6.0 00:03:32.026 LIB libspdk_conf.a 00:03:32.026 CC lib/idxd/idxd_kernel.o 00:03:32.026 CC lib/env_dpdk/memory.o 00:03:32.026 SO libspdk_conf.so.6.0 00:03:32.026 LIB libspdk_rdma_utils.a 00:03:32.026 SYMLINK libspdk_rdma_provider.so 00:03:32.026 CC lib/vmd/led.o 00:03:32.026 SO libspdk_rdma_utils.so.1.0 00:03:32.026 LIB libspdk_json.a 00:03:32.026 SYMLINK libspdk_conf.so 00:03:32.026 CC lib/env_dpdk/pci.o 00:03:32.026 SO libspdk_json.so.6.0 00:03:32.026 SYMLINK libspdk_rdma_utils.so 00:03:32.026 CC lib/env_dpdk/init.o 00:03:32.285 CC lib/env_dpdk/threads.o 00:03:32.285 SYMLINK libspdk_json.so 00:03:32.286 CC lib/env_dpdk/pci_ioat.o 00:03:32.286 CC lib/env_dpdk/pci_virtio.o 00:03:32.286 CC lib/env_dpdk/pci_vmd.o 00:03:32.286 CC lib/env_dpdk/pci_idxd.o 00:03:32.286 CC lib/env_dpdk/pci_event.o 00:03:32.286 CC lib/env_dpdk/sigbus_handler.o 00:03:32.286 CC lib/env_dpdk/pci_dpdk.o 00:03:32.544 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:32.544 LIB libspdk_idxd.a 00:03:32.544 SO libspdk_idxd.so.12.1 00:03:32.544 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:32.544 LIB libspdk_vmd.a 00:03:32.544 SO libspdk_vmd.so.6.0 00:03:32.544 SYMLINK libspdk_idxd.so 00:03:32.544 SYMLINK libspdk_vmd.so 00:03:32.803 CC lib/jsonrpc/jsonrpc_server.o 00:03:32.803 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:32.803 CC lib/jsonrpc/jsonrpc_client.o 00:03:32.803 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:33.062 LIB libspdk_jsonrpc.a 00:03:33.062 SO libspdk_jsonrpc.so.6.0 00:03:33.062 SYMLINK libspdk_jsonrpc.so 00:03:33.320 LIB libspdk_env_dpdk.a 00:03:33.578 SO libspdk_env_dpdk.so.15.0 00:03:33.578 CC lib/rpc/rpc.o 00:03:33.578 SYMLINK libspdk_env_dpdk.so 00:03:33.836 LIB libspdk_rpc.a 00:03:33.836 SO libspdk_rpc.so.6.0 00:03:33.836 SYMLINK libspdk_rpc.so 00:03:34.404 CC lib/notify/notify.o 00:03:34.404 CC lib/notify/notify_rpc.o 00:03:34.404 CC lib/trace/trace.o 00:03:34.404 CC lib/trace/trace_flags.o 00:03:34.404 CC lib/trace/trace_rpc.o 00:03:34.404 CC lib/keyring/keyring.o 00:03:34.404 CC lib/keyring/keyring_rpc.o 00:03:34.404 LIB libspdk_notify.a 00:03:34.404 SO libspdk_notify.so.6.0 00:03:34.666 LIB libspdk_trace.a 00:03:34.666 LIB libspdk_keyring.a 00:03:34.666 SYMLINK libspdk_notify.so 00:03:34.666 SO libspdk_trace.so.11.0 00:03:34.666 SO libspdk_keyring.so.2.0 00:03:34.666 SYMLINK libspdk_trace.so 00:03:34.666 SYMLINK libspdk_keyring.so 00:03:34.927 CC lib/thread/thread.o 00:03:34.927 CC lib/thread/iobuf.o 00:03:35.214 CC lib/sock/sock.o 00:03:35.214 CC lib/sock/sock_rpc.o 00:03:35.499 LIB libspdk_sock.a 00:03:35.499 SO libspdk_sock.so.10.0 00:03:35.759 SYMLINK libspdk_sock.so 00:03:36.017 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:36.017 CC lib/nvme/nvme_ctrlr.o 00:03:36.017 CC lib/nvme/nvme_fabric.o 00:03:36.017 CC lib/nvme/nvme_ns_cmd.o 00:03:36.017 CC lib/nvme/nvme_pcie_common.o 00:03:36.017 CC lib/nvme/nvme_ns.o 00:03:36.017 CC lib/nvme/nvme_pcie.o 00:03:36.017 CC lib/nvme/nvme_qpair.o 00:03:36.017 CC lib/nvme/nvme.o 00:03:36.584 LIB libspdk_thread.a 00:03:36.585 CC lib/nvme/nvme_quirks.o 00:03:36.585 CC lib/nvme/nvme_transport.o 00:03:36.585 SO libspdk_thread.so.10.2 00:03:36.843 CC lib/nvme/nvme_discovery.o 00:03:36.843 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:36.843 SYMLINK libspdk_thread.so 00:03:36.843 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:03:36.843 CC lib/nvme/nvme_tcp.o 00:03:36.843 CC lib/nvme/nvme_opal.o 00:03:36.843 CC lib/nvme/nvme_io_msg.o 00:03:37.101 CC lib/nvme/nvme_poll_group.o 00:03:37.101 CC lib/nvme/nvme_zns.o 00:03:37.360 CC lib/nvme/nvme_stubs.o 00:03:37.360 CC lib/nvme/nvme_auth.o 00:03:37.360 CC lib/nvme/nvme_cuse.o 00:03:37.360 CC lib/nvme/nvme_rdma.o 00:03:37.619 CC lib/accel/accel.o 00:03:37.619 CC lib/blob/blobstore.o 00:03:37.619 CC lib/accel/accel_rpc.o 00:03:37.619 CC lib/accel/accel_sw.o 00:03:37.879 CC lib/init/json_config.o 00:03:38.137 CC lib/init/subsystem.o 00:03:38.137 CC lib/virtio/virtio.o 00:03:38.137 CC lib/init/subsystem_rpc.o 00:03:38.137 CC lib/init/rpc.o 00:03:38.137 CC lib/blob/request.o 00:03:38.396 CC lib/virtio/virtio_vhost_user.o 00:03:38.396 LIB libspdk_init.a 00:03:38.396 CC lib/virtio/virtio_vfio_user.o 00:03:38.396 CC lib/fsdev/fsdev.o 00:03:38.396 SO libspdk_init.so.6.0 00:03:38.396 CC lib/blob/zeroes.o 00:03:38.396 SYMLINK libspdk_init.so 00:03:38.396 CC lib/virtio/virtio_pci.o 00:03:38.654 CC lib/blob/blob_bs_dev.o 00:03:38.654 CC lib/fsdev/fsdev_io.o 00:03:38.654 CC lib/fsdev/fsdev_rpc.o 00:03:38.654 CC lib/event/app.o 00:03:38.654 CC lib/event/reactor.o 00:03:38.654 CC lib/event/log_rpc.o 00:03:38.654 LIB libspdk_nvme.a 00:03:38.913 LIB libspdk_virtio.a 00:03:38.913 CC lib/event/app_rpc.o 00:03:38.913 SO libspdk_virtio.so.7.0 00:03:38.913 LIB libspdk_accel.a 00:03:38.913 SO libspdk_accel.so.16.0 00:03:38.913 CC lib/event/scheduler_static.o 00:03:38.913 SYMLINK libspdk_virtio.so 00:03:38.913 SYMLINK libspdk_accel.so 00:03:38.913 SO libspdk_nvme.so.14.0 00:03:39.171 LIB libspdk_fsdev.a 00:03:39.171 SO libspdk_fsdev.so.1.0 00:03:39.171 LIB libspdk_event.a 00:03:39.171 SYMLINK libspdk_fsdev.so 00:03:39.171 SO libspdk_event.so.15.0 00:03:39.171 SYMLINK libspdk_nvme.so 00:03:39.430 CC lib/bdev/bdev.o 00:03:39.430 CC lib/bdev/bdev_rpc.o 00:03:39.430 CC lib/bdev/part.o 00:03:39.430 CC lib/bdev/bdev_zone.o 00:03:39.430 CC lib/bdev/scsi_nvme.o 00:03:39.430 SYMLINK libspdk_event.so 00:03:39.430 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:40.367 LIB libspdk_fuse_dispatcher.a 00:03:40.367 SO libspdk_fuse_dispatcher.so.1.0 00:03:40.367 SYMLINK libspdk_fuse_dispatcher.so 00:03:41.301 LIB libspdk_blob.a 00:03:41.301 SO libspdk_blob.so.11.0 00:03:41.560 SYMLINK libspdk_blob.so 00:03:41.817 CC lib/lvol/lvol.o 00:03:41.817 CC lib/blobfs/blobfs.o 00:03:41.817 CC lib/blobfs/tree.o 00:03:42.383 LIB libspdk_bdev.a 00:03:42.642 SO libspdk_bdev.so.17.0 00:03:42.642 SYMLINK libspdk_bdev.so 00:03:42.642 LIB libspdk_blobfs.a 00:03:42.901 SO libspdk_blobfs.so.10.0 00:03:42.901 LIB libspdk_lvol.a 00:03:42.901 SYMLINK libspdk_blobfs.so 00:03:42.901 SO libspdk_lvol.so.10.0 00:03:42.901 CC lib/nbd/nbd.o 00:03:42.901 CC lib/nvmf/ctrlr_discovery.o 00:03:42.901 CC lib/nvmf/ctrlr.o 00:03:42.901 CC lib/nbd/nbd_rpc.o 00:03:42.901 CC lib/nvmf/ctrlr_bdev.o 00:03:42.901 SYMLINK libspdk_lvol.so 00:03:42.901 CC lib/ftl/ftl_core.o 00:03:42.901 CC lib/ftl/ftl_init.o 00:03:42.901 CC lib/ublk/ublk.o 00:03:42.901 CC lib/ftl/ftl_layout.o 00:03:42.901 CC lib/scsi/dev.o 00:03:43.159 CC lib/scsi/lun.o 00:03:43.159 CC lib/scsi/port.o 00:03:43.159 CC lib/nvmf/subsystem.o 00:03:43.159 CC lib/nvmf/nvmf.o 00:03:43.417 CC lib/nvmf/nvmf_rpc.o 00:03:43.417 CC lib/ftl/ftl_debug.o 00:03:43.417 LIB libspdk_nbd.a 00:03:43.417 CC lib/scsi/scsi.o 00:03:43.417 SO libspdk_nbd.so.7.0 00:03:43.417 SYMLINK libspdk_nbd.so 00:03:43.417 CC lib/scsi/scsi_bdev.o 00:03:43.417 CC lib/scsi/scsi_pr.o 00:03:43.674 CC 
lib/ftl/ftl_io.o 00:03:43.674 CC lib/ublk/ublk_rpc.o 00:03:43.674 CC lib/nvmf/transport.o 00:03:43.674 LIB libspdk_ublk.a 00:03:43.674 SO libspdk_ublk.so.3.0 00:03:43.674 CC lib/nvmf/tcp.o 00:03:43.939 CC lib/ftl/ftl_sb.o 00:03:43.939 SYMLINK libspdk_ublk.so 00:03:43.939 CC lib/scsi/scsi_rpc.o 00:03:43.939 CC lib/nvmf/stubs.o 00:03:43.939 CC lib/ftl/ftl_l2p.o 00:03:43.939 CC lib/nvmf/mdns_server.o 00:03:44.223 CC lib/scsi/task.o 00:03:44.223 CC lib/ftl/ftl_l2p_flat.o 00:03:44.223 CC lib/nvmf/rdma.o 00:03:44.223 LIB libspdk_scsi.a 00:03:44.223 CC lib/nvmf/auth.o 00:03:44.223 CC lib/ftl/ftl_nv_cache.o 00:03:44.223 SO libspdk_scsi.so.9.0 00:03:44.480 CC lib/ftl/ftl_band.o 00:03:44.480 SYMLINK libspdk_scsi.so 00:03:44.480 CC lib/ftl/ftl_band_ops.o 00:03:44.480 CC lib/ftl/ftl_writer.o 00:03:44.737 CC lib/iscsi/conn.o 00:03:44.737 CC lib/iscsi/init_grp.o 00:03:44.737 CC lib/vhost/vhost.o 00:03:44.737 CC lib/ftl/ftl_rq.o 00:03:44.737 CC lib/iscsi/iscsi.o 00:03:44.995 CC lib/iscsi/param.o 00:03:44.995 CC lib/iscsi/portal_grp.o 00:03:44.995 CC lib/ftl/ftl_reloc.o 00:03:45.253 CC lib/ftl/ftl_l2p_cache.o 00:03:45.253 CC lib/ftl/ftl_p2l.o 00:03:45.253 CC lib/ftl/ftl_p2l_log.o 00:03:45.253 CC lib/ftl/mngt/ftl_mngt.o 00:03:45.511 CC lib/vhost/vhost_rpc.o 00:03:45.511 CC lib/vhost/vhost_scsi.o 00:03:45.511 CC lib/vhost/vhost_blk.o 00:03:45.511 CC lib/vhost/rte_vhost_user.o 00:03:45.769 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:45.769 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:45.769 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:45.769 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:45.769 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:45.769 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:45.769 CC lib/iscsi/tgt_node.o 00:03:46.027 CC lib/iscsi/iscsi_subsystem.o 00:03:46.027 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.027 CC lib/iscsi/iscsi_rpc.o 00:03:46.027 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.285 CC lib/iscsi/task.o 00:03:46.285 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:46.285 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:46.544 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:46.544 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:46.544 CC lib/ftl/utils/ftl_conf.o 00:03:46.544 CC lib/ftl/utils/ftl_md.o 00:03:46.544 CC lib/ftl/utils/ftl_mempool.o 00:03:46.544 CC lib/ftl/utils/ftl_bitmap.o 00:03:46.544 CC lib/ftl/utils/ftl_property.o 00:03:46.544 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:46.544 LIB libspdk_iscsi.a 00:03:46.802 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:46.802 LIB libspdk_nvmf.a 00:03:46.802 SO libspdk_iscsi.so.8.0 00:03:46.802 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:46.802 LIB libspdk_vhost.a 00:03:46.802 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:46.802 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:46.802 SO libspdk_vhost.so.8.0 00:03:46.802 SO libspdk_nvmf.so.19.0 00:03:46.802 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:46.802 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:46.802 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:47.060 SYMLINK libspdk_iscsi.so 00:03:47.060 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:47.060 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:47.060 SYMLINK libspdk_vhost.so 00:03:47.060 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.060 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:47.060 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:47.060 CC lib/ftl/base/ftl_base_dev.o 00:03:47.060 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.060 CC lib/ftl/ftl_trace.o 00:03:47.060 SYMLINK libspdk_nvmf.so 00:03:47.318 LIB libspdk_ftl.a 00:03:47.629 SO libspdk_ftl.so.9.0 00:03:47.886 SYMLINK libspdk_ftl.so 00:03:48.453 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.453 CC 
module/keyring/linux/keyring.o 00:03:48.453 CC module/fsdev/aio/fsdev_aio.o 00:03:48.453 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.453 CC module/keyring/file/keyring.o 00:03:48.453 CC module/blob/bdev/blob_bdev.o 00:03:48.453 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.453 CC module/sock/posix/posix.o 00:03:48.453 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.453 CC module/accel/error/accel_error.o 00:03:48.453 LIB libspdk_env_dpdk_rpc.a 00:03:48.453 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.710 CC module/keyring/linux/keyring_rpc.o 00:03:48.710 CC module/keyring/file/keyring_rpc.o 00:03:48.710 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.710 LIB libspdk_scheduler_gscheduler.a 00:03:48.710 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:48.710 LIB libspdk_scheduler_dpdk_governor.a 00:03:48.710 SO libspdk_scheduler_gscheduler.so.4.0 00:03:48.710 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:48.710 CC module/accel/error/accel_error_rpc.o 00:03:48.710 LIB libspdk_scheduler_dynamic.a 00:03:48.710 SO libspdk_scheduler_dynamic.so.4.0 00:03:48.710 SYMLINK libspdk_scheduler_gscheduler.so 00:03:48.710 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:48.710 LIB libspdk_keyring_linux.a 00:03:48.710 SYMLINK libspdk_scheduler_dynamic.so 00:03:48.710 LIB libspdk_blob_bdev.a 00:03:48.710 LIB libspdk_keyring_file.a 00:03:48.710 CC module/fsdev/aio/linux_aio_mgr.o 00:03:48.710 SO libspdk_keyring_linux.so.1.0 00:03:48.710 SO libspdk_blob_bdev.so.11.0 00:03:48.710 SO libspdk_keyring_file.so.2.0 00:03:48.710 LIB libspdk_accel_error.a 00:03:48.969 SYMLINK libspdk_keyring_linux.so 00:03:48.969 SO libspdk_accel_error.so.2.0 00:03:48.969 SYMLINK libspdk_keyring_file.so 00:03:48.969 SYMLINK libspdk_blob_bdev.so 00:03:48.969 CC module/accel/ioat/accel_ioat.o 00:03:48.969 CC module/accel/ioat/accel_ioat_rpc.o 00:03:48.969 CC module/accel/dsa/accel_dsa.o 00:03:48.969 CC module/accel/iaa/accel_iaa.o 00:03:48.969 SYMLINK libspdk_accel_error.so 00:03:48.969 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.969 CC module/accel/iaa/accel_iaa_rpc.o 00:03:49.228 LIB libspdk_accel_ioat.a 00:03:49.228 LIB libspdk_accel_iaa.a 00:03:49.228 CC module/blobfs/bdev/blobfs_bdev.o 00:03:49.228 CC module/bdev/delay/vbdev_delay.o 00:03:49.228 SO libspdk_accel_ioat.so.6.0 00:03:49.228 SO libspdk_accel_iaa.so.3.0 00:03:49.228 LIB libspdk_fsdev_aio.a 00:03:49.228 SYMLINK libspdk_accel_ioat.so 00:03:49.228 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.228 CC module/bdev/gpt/gpt.o 00:03:49.228 CC module/bdev/error/vbdev_error.o 00:03:49.228 SO libspdk_fsdev_aio.so.1.0 00:03:49.228 SYMLINK libspdk_accel_iaa.so 00:03:49.228 LIB libspdk_accel_dsa.a 00:03:49.228 LIB libspdk_sock_posix.a 00:03:49.228 SO libspdk_accel_dsa.so.5.0 00:03:49.228 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:49.228 SO libspdk_sock_posix.so.6.0 00:03:49.228 SYMLINK libspdk_fsdev_aio.so 00:03:49.486 CC module/bdev/lvol/vbdev_lvol.o 00:03:49.486 SYMLINK libspdk_accel_dsa.so 00:03:49.486 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.486 LIB libspdk_blobfs_bdev.a 00:03:49.486 SYMLINK libspdk_sock_posix.so 00:03:49.486 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.486 SO libspdk_blobfs_bdev.so.6.0 00:03:49.486 CC module/bdev/malloc/bdev_malloc.o 00:03:49.486 CC module/bdev/error/vbdev_error_rpc.o 00:03:49.486 SYMLINK libspdk_blobfs_bdev.so 00:03:49.486 CC module/bdev/null/bdev_null.o 00:03:49.486 LIB libspdk_bdev_delay.a 00:03:49.486 SO libspdk_bdev_delay.so.6.0 00:03:49.486 CC module/bdev/nvme/bdev_nvme.o 00:03:49.744 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:49.744 LIB libspdk_bdev_error.a 00:03:49.744 SYMLINK libspdk_bdev_delay.so 00:03:49.744 CC module/bdev/raid/bdev_raid.o 00:03:49.744 SO libspdk_bdev_error.so.6.0 00:03:49.744 LIB libspdk_bdev_gpt.a 00:03:49.744 SO libspdk_bdev_gpt.so.6.0 00:03:49.744 SYMLINK libspdk_bdev_error.so 00:03:49.744 CC module/bdev/raid/bdev_raid_rpc.o 00:03:49.744 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.744 SYMLINK libspdk_bdev_gpt.so 00:03:49.744 CC module/bdev/raid/raid0.o 00:03:49.744 CC module/bdev/null/bdev_null_rpc.o 00:03:49.744 CC module/bdev/split/vbdev_split.o 00:03:50.003 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:50.003 LIB libspdk_bdev_lvol.a 00:03:50.003 SO libspdk_bdev_lvol.so.6.0 00:03:50.003 LIB libspdk_bdev_null.a 00:03:50.003 CC module/bdev/split/vbdev_split_rpc.o 00:03:50.003 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:50.003 SO libspdk_bdev_null.so.6.0 00:03:50.003 SYMLINK libspdk_bdev_lvol.so 00:03:50.003 LIB libspdk_bdev_malloc.a 00:03:50.003 CC module/bdev/raid/raid1.o 00:03:50.003 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:50.003 CC module/bdev/nvme/nvme_rpc.o 00:03:50.003 SO libspdk_bdev_malloc.so.6.0 00:03:50.003 SYMLINK libspdk_bdev_null.so 00:03:50.003 CC module/bdev/nvme/bdev_mdns_client.o 00:03:50.003 CC module/bdev/nvme/vbdev_opal.o 00:03:50.261 SYMLINK libspdk_bdev_malloc.so 00:03:50.261 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:50.261 LIB libspdk_bdev_passthru.a 00:03:50.261 LIB libspdk_bdev_split.a 00:03:50.261 SO libspdk_bdev_split.so.6.0 00:03:50.261 SO libspdk_bdev_passthru.so.6.0 00:03:50.261 CC module/bdev/raid/concat.o 00:03:50.261 SYMLINK libspdk_bdev_split.so 00:03:50.519 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:50.519 SYMLINK libspdk_bdev_passthru.so 00:03:50.519 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:50.519 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:50.519 CC module/bdev/xnvme/bdev_xnvme.o 00:03:50.519 CC module/bdev/aio/bdev_aio.o 00:03:50.519 CC module/bdev/ftl/bdev_ftl.o 00:03:50.519 CC module/bdev/iscsi/bdev_iscsi.o 00:03:50.777 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:50.778 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:50.778 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:50.778 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:50.778 LIB libspdk_bdev_zone_block.a 00:03:50.778 CC module/bdev/aio/bdev_aio_rpc.o 00:03:51.035 LIB libspdk_bdev_raid.a 00:03:51.035 SO libspdk_bdev_zone_block.so.6.0 00:03:51.035 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:51.035 LIB libspdk_bdev_ftl.a 00:03:51.035 SYMLINK libspdk_bdev_zone_block.so 00:03:51.035 SO libspdk_bdev_raid.so.6.0 00:03:51.035 LIB libspdk_bdev_xnvme.a 00:03:51.035 SO libspdk_bdev_ftl.so.6.0 00:03:51.035 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:51.035 SO libspdk_bdev_xnvme.so.3.0 00:03:51.035 LIB libspdk_bdev_aio.a 00:03:51.035 SYMLINK libspdk_bdev_ftl.so 00:03:51.035 SYMLINK libspdk_bdev_raid.so 00:03:51.035 SO libspdk_bdev_aio.so.6.0 00:03:51.035 SYMLINK libspdk_bdev_xnvme.so 00:03:51.293 SYMLINK libspdk_bdev_aio.so 00:03:51.293 LIB libspdk_bdev_iscsi.a 00:03:51.293 SO libspdk_bdev_iscsi.so.6.0 00:03:51.293 SYMLINK libspdk_bdev_iscsi.so 00:03:51.293 LIB libspdk_bdev_virtio.a 00:03:51.293 SO libspdk_bdev_virtio.so.6.0 00:03:51.552 SYMLINK libspdk_bdev_virtio.so 00:03:52.489 LIB libspdk_bdev_nvme.a 00:03:52.489 SO libspdk_bdev_nvme.so.7.0 00:03:52.748 SYMLINK libspdk_bdev_nvme.so 00:03:53.314 CC module/event/subsystems/vmd/vmd.o 00:03:53.315 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:53.315 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:03:53.315 CC module/event/subsystems/iobuf/iobuf.o 00:03:53.315 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:53.315 CC module/event/subsystems/sock/sock.o 00:03:53.315 CC module/event/subsystems/keyring/keyring.o 00:03:53.315 CC module/event/subsystems/scheduler/scheduler.o 00:03:53.315 CC module/event/subsystems/fsdev/fsdev.o 00:03:53.315 LIB libspdk_event_vhost_blk.a 00:03:53.315 LIB libspdk_event_fsdev.a 00:03:53.315 LIB libspdk_event_vmd.a 00:03:53.315 LIB libspdk_event_scheduler.a 00:03:53.315 SO libspdk_event_vhost_blk.so.3.0 00:03:53.315 LIB libspdk_event_sock.a 00:03:53.315 LIB libspdk_event_keyring.a 00:03:53.315 LIB libspdk_event_iobuf.a 00:03:53.573 SO libspdk_event_fsdev.so.1.0 00:03:53.573 SO libspdk_event_scheduler.so.4.0 00:03:53.573 SO libspdk_event_sock.so.5.0 00:03:53.573 SO libspdk_event_vmd.so.6.0 00:03:53.573 SO libspdk_event_keyring.so.1.0 00:03:53.573 SO libspdk_event_iobuf.so.3.0 00:03:53.573 SYMLINK libspdk_event_vhost_blk.so 00:03:53.573 SYMLINK libspdk_event_scheduler.so 00:03:53.573 SYMLINK libspdk_event_sock.so 00:03:53.573 SYMLINK libspdk_event_fsdev.so 00:03:53.573 SYMLINK libspdk_event_keyring.so 00:03:53.573 SYMLINK libspdk_event_vmd.so 00:03:53.573 SYMLINK libspdk_event_iobuf.so 00:03:53.832 CC module/event/subsystems/accel/accel.o 00:03:54.090 LIB libspdk_event_accel.a 00:03:54.090 SO libspdk_event_accel.so.6.0 00:03:54.348 SYMLINK libspdk_event_accel.so 00:03:54.607 CC module/event/subsystems/bdev/bdev.o 00:03:54.864 LIB libspdk_event_bdev.a 00:03:54.864 SO libspdk_event_bdev.so.6.0 00:03:54.864 SYMLINK libspdk_event_bdev.so 00:03:55.432 CC module/event/subsystems/scsi/scsi.o 00:03:55.432 CC module/event/subsystems/nbd/nbd.o 00:03:55.432 CC module/event/subsystems/ublk/ublk.o 00:03:55.432 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:55.432 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:55.432 LIB libspdk_event_scsi.a 00:03:55.432 LIB libspdk_event_ublk.a 00:03:55.432 LIB libspdk_event_nbd.a 00:03:55.432 SO libspdk_event_scsi.so.6.0 00:03:55.432 SO libspdk_event_nbd.so.6.0 00:03:55.432 SO libspdk_event_ublk.so.3.0 00:03:55.432 SYMLINK libspdk_event_scsi.so 00:03:55.432 SYMLINK libspdk_event_ublk.so 00:03:55.692 SYMLINK libspdk_event_nbd.so 00:03:55.692 LIB libspdk_event_nvmf.a 00:03:55.692 SO libspdk_event_nvmf.so.6.0 00:03:55.692 SYMLINK libspdk_event_nvmf.so 00:03:55.955 CC module/event/subsystems/iscsi/iscsi.o 00:03:55.955 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:55.955 LIB libspdk_event_vhost_scsi.a 00:03:56.213 LIB libspdk_event_iscsi.a 00:03:56.214 SO libspdk_event_vhost_scsi.so.3.0 00:03:56.214 SO libspdk_event_iscsi.so.6.0 00:03:56.214 SYMLINK libspdk_event_vhost_scsi.so 00:03:56.214 SYMLINK libspdk_event_iscsi.so 00:03:56.472 SO libspdk.so.6.0 00:03:56.472 SYMLINK libspdk.so 00:03:56.730 CC test/rpc_client/rpc_client_test.o 00:03:56.730 CXX app/trace/trace.o 00:03:56.730 CC app/trace_record/trace_record.o 00:03:56.730 TEST_HEADER include/spdk/accel.h 00:03:56.731 TEST_HEADER include/spdk/accel_module.h 00:03:56.731 TEST_HEADER include/spdk/assert.h 00:03:56.731 TEST_HEADER include/spdk/barrier.h 00:03:56.731 TEST_HEADER include/spdk/base64.h 00:03:56.731 TEST_HEADER include/spdk/bdev.h 00:03:56.731 TEST_HEADER include/spdk/bdev_module.h 00:03:56.731 TEST_HEADER include/spdk/bdev_zone.h 00:03:56.731 TEST_HEADER include/spdk/bit_array.h 00:03:56.731 TEST_HEADER include/spdk/bit_pool.h 00:03:56.731 TEST_HEADER include/spdk/blob_bdev.h 00:03:56.731 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:03:56.731 TEST_HEADER include/spdk/blobfs.h 00:03:56.731 TEST_HEADER include/spdk/blob.h 00:03:56.731 TEST_HEADER include/spdk/conf.h 00:03:56.731 TEST_HEADER include/spdk/config.h 00:03:56.731 TEST_HEADER include/spdk/cpuset.h 00:03:56.731 TEST_HEADER include/spdk/crc16.h 00:03:56.731 TEST_HEADER include/spdk/crc32.h 00:03:56.731 CC app/nvmf_tgt/nvmf_main.o 00:03:56.731 TEST_HEADER include/spdk/crc64.h 00:03:56.731 TEST_HEADER include/spdk/dif.h 00:03:56.731 TEST_HEADER include/spdk/dma.h 00:03:56.731 TEST_HEADER include/spdk/endian.h 00:03:56.731 TEST_HEADER include/spdk/env_dpdk.h 00:03:56.731 TEST_HEADER include/spdk/env.h 00:03:56.731 TEST_HEADER include/spdk/event.h 00:03:56.731 TEST_HEADER include/spdk/fd_group.h 00:03:56.731 TEST_HEADER include/spdk/fd.h 00:03:56.731 TEST_HEADER include/spdk/file.h 00:03:56.731 TEST_HEADER include/spdk/fsdev.h 00:03:56.731 TEST_HEADER include/spdk/fsdev_module.h 00:03:56.731 TEST_HEADER include/spdk/ftl.h 00:03:56.731 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:56.731 TEST_HEADER include/spdk/gpt_spec.h 00:03:56.731 TEST_HEADER include/spdk/hexlify.h 00:03:56.731 TEST_HEADER include/spdk/histogram_data.h 00:03:56.731 TEST_HEADER include/spdk/idxd.h 00:03:56.731 TEST_HEADER include/spdk/idxd_spec.h 00:03:56.731 CC examples/util/zipf/zipf.o 00:03:56.731 TEST_HEADER include/spdk/init.h 00:03:56.731 TEST_HEADER include/spdk/ioat.h 00:03:56.731 CC test/thread/poller_perf/poller_perf.o 00:03:56.731 TEST_HEADER include/spdk/ioat_spec.h 00:03:56.731 TEST_HEADER include/spdk/iscsi_spec.h 00:03:56.731 TEST_HEADER include/spdk/json.h 00:03:56.731 TEST_HEADER include/spdk/jsonrpc.h 00:03:56.731 TEST_HEADER include/spdk/keyring.h 00:03:56.731 CC test/dma/test_dma/test_dma.o 00:03:56.731 TEST_HEADER include/spdk/keyring_module.h 00:03:56.731 TEST_HEADER include/spdk/likely.h 00:03:56.731 TEST_HEADER include/spdk/log.h 00:03:56.731 TEST_HEADER include/spdk/lvol.h 00:03:56.731 TEST_HEADER include/spdk/md5.h 00:03:56.731 TEST_HEADER include/spdk/memory.h 00:03:56.731 TEST_HEADER include/spdk/mmio.h 00:03:56.731 TEST_HEADER include/spdk/nbd.h 00:03:56.731 CC test/app/bdev_svc/bdev_svc.o 00:03:56.989 TEST_HEADER include/spdk/net.h 00:03:56.989 TEST_HEADER include/spdk/notify.h 00:03:56.989 TEST_HEADER include/spdk/nvme.h 00:03:56.989 TEST_HEADER include/spdk/nvme_intel.h 00:03:56.989 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:56.989 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:56.989 TEST_HEADER include/spdk/nvme_spec.h 00:03:56.989 TEST_HEADER include/spdk/nvme_zns.h 00:03:56.989 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:56.989 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:56.989 TEST_HEADER include/spdk/nvmf.h 00:03:56.989 TEST_HEADER include/spdk/nvmf_spec.h 00:03:56.989 TEST_HEADER include/spdk/nvmf_transport.h 00:03:56.989 TEST_HEADER include/spdk/opal.h 00:03:56.989 TEST_HEADER include/spdk/opal_spec.h 00:03:56.989 TEST_HEADER include/spdk/pci_ids.h 00:03:56.989 TEST_HEADER include/spdk/pipe.h 00:03:56.989 TEST_HEADER include/spdk/queue.h 00:03:56.989 TEST_HEADER include/spdk/reduce.h 00:03:56.989 TEST_HEADER include/spdk/rpc.h 00:03:56.989 TEST_HEADER include/spdk/scheduler.h 00:03:56.989 TEST_HEADER include/spdk/scsi.h 00:03:56.989 CC test/env/mem_callbacks/mem_callbacks.o 00:03:56.989 TEST_HEADER include/spdk/scsi_spec.h 00:03:56.989 TEST_HEADER include/spdk/sock.h 00:03:56.989 TEST_HEADER include/spdk/stdinc.h 00:03:56.989 TEST_HEADER include/spdk/string.h 00:03:56.989 LINK rpc_client_test 00:03:56.989 TEST_HEADER 
include/spdk/thread.h 00:03:56.989 TEST_HEADER include/spdk/trace.h 00:03:56.989 TEST_HEADER include/spdk/trace_parser.h 00:03:56.989 TEST_HEADER include/spdk/tree.h 00:03:56.989 TEST_HEADER include/spdk/ublk.h 00:03:56.989 TEST_HEADER include/spdk/util.h 00:03:56.989 TEST_HEADER include/spdk/uuid.h 00:03:56.989 TEST_HEADER include/spdk/version.h 00:03:56.989 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:56.989 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:56.989 TEST_HEADER include/spdk/vhost.h 00:03:56.989 TEST_HEADER include/spdk/vmd.h 00:03:56.989 TEST_HEADER include/spdk/xor.h 00:03:56.989 TEST_HEADER include/spdk/zipf.h 00:03:56.989 CXX test/cpp_headers/accel.o 00:03:56.989 LINK nvmf_tgt 00:03:56.989 LINK poller_perf 00:03:56.989 LINK zipf 00:03:56.989 LINK spdk_trace_record 00:03:56.989 LINK bdev_svc 00:03:57.248 LINK spdk_trace 00:03:57.248 CXX test/cpp_headers/accel_module.o 00:03:57.248 CC test/env/vtophys/vtophys.o 00:03:57.248 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.248 CXX test/cpp_headers/assert.o 00:03:57.248 CC examples/ioat/perf/perf.o 00:03:57.248 CC examples/vmd/lsvmd/lsvmd.o 00:03:57.248 LINK vtophys 00:03:57.506 CC examples/idxd/perf/perf.o 00:03:57.506 LINK test_dma 00:03:57.506 LINK env_dpdk_post_init 00:03:57.506 CC app/iscsi_tgt/iscsi_tgt.o 00:03:57.506 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:57.506 LINK mem_callbacks 00:03:57.506 LINK lsvmd 00:03:57.506 CXX test/cpp_headers/barrier.o 00:03:57.506 CXX test/cpp_headers/base64.o 00:03:57.506 LINK ioat_perf 00:03:57.765 LINK iscsi_tgt 00:03:57.765 CC app/spdk_tgt/spdk_tgt.o 00:03:57.765 CC test/env/memory/memory_ut.o 00:03:57.765 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:57.765 CXX test/cpp_headers/bdev.o 00:03:57.765 CC examples/vmd/led/led.o 00:03:57.765 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:57.765 LINK idxd_perf 00:03:57.765 CC examples/ioat/verify/verify.o 00:03:58.024 LINK spdk_tgt 00:03:58.024 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:58.024 LINK led 00:03:58.024 LINK nvme_fuzz 00:03:58.024 CXX test/cpp_headers/bdev_module.o 00:03:58.024 CC test/event/event_perf/event_perf.o 00:03:58.024 LINK verify 00:03:58.024 CC test/nvme/aer/aer.o 00:03:58.024 CXX test/cpp_headers/bdev_zone.o 00:03:58.282 LINK event_perf 00:03:58.282 CC app/spdk_lspci/spdk_lspci.o 00:03:58.282 CC test/accel/dif/dif.o 00:03:58.282 CXX test/cpp_headers/bit_array.o 00:03:58.282 CC test/blobfs/mkfs/mkfs.o 00:03:58.282 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:58.282 LINK vhost_fuzz 00:03:58.541 LINK spdk_lspci 00:03:58.541 LINK aer 00:03:58.541 CC test/event/reactor/reactor.o 00:03:58.541 CXX test/cpp_headers/bit_pool.o 00:03:58.541 LINK mkfs 00:03:58.541 LINK interrupt_tgt 00:03:58.541 LINK reactor 00:03:58.541 CC app/spdk_nvme_perf/perf.o 00:03:58.849 CXX test/cpp_headers/blob_bdev.o 00:03:58.849 CC test/nvme/reset/reset.o 00:03:58.849 CC test/env/pci/pci_ut.o 00:03:58.849 CC test/event/reactor_perf/reactor_perf.o 00:03:58.849 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.849 CC test/lvol/esnap/esnap.o 00:03:58.849 LINK memory_ut 00:03:59.107 LINK reactor_perf 00:03:59.107 LINK reset 00:03:59.107 CC examples/thread/thread/thread_ex.o 00:03:59.107 CXX test/cpp_headers/blobfs.o 00:03:59.108 LINK dif 00:03:59.366 CC test/event/app_repeat/app_repeat.o 00:03:59.366 CXX test/cpp_headers/blob.o 00:03:59.366 LINK pci_ut 00:03:59.366 CC examples/sock/hello_world/hello_sock.o 00:03:59.366 CC test/nvme/sgl/sgl.o 00:03:59.366 LINK thread 00:03:59.366 LINK app_repeat 00:03:59.366 CC 
test/event/scheduler/scheduler.o 00:03:59.366 CXX test/cpp_headers/conf.o 00:03:59.625 CXX test/cpp_headers/config.o 00:03:59.625 CXX test/cpp_headers/cpuset.o 00:03:59.625 LINK hello_sock 00:03:59.625 LINK spdk_nvme_perf 00:03:59.625 LINK sgl 00:03:59.625 CC test/nvme/e2edp/nvme_dp.o 00:03:59.625 CC test/nvme/overhead/overhead.o 00:03:59.625 LINK iscsi_fuzz 00:03:59.625 LINK scheduler 00:03:59.625 CXX test/cpp_headers/crc16.o 00:03:59.625 CC test/nvme/err_injection/err_injection.o 00:03:59.883 CC app/spdk_nvme_identify/identify.o 00:03:59.883 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.883 CC examples/accel/perf/accel_perf.o 00:03:59.883 CXX test/cpp_headers/crc32.o 00:03:59.883 LINK err_injection 00:03:59.883 LINK nvme_dp 00:03:59.883 LINK overhead 00:03:59.883 CC test/app/histogram_perf/histogram_perf.o 00:04:00.142 LINK spdk_nvme_discover 00:04:00.142 CXX test/cpp_headers/crc64.o 00:04:00.142 CC test/bdev/bdevio/bdevio.o 00:04:00.142 LINK histogram_perf 00:04:00.142 CC test/nvme/startup/startup.o 00:04:00.142 CXX test/cpp_headers/dif.o 00:04:00.401 CC app/spdk_top/spdk_top.o 00:04:00.401 CC examples/nvme/hello_world/hello_world.o 00:04:00.401 CC examples/blob/hello_world/hello_blob.o 00:04:00.401 LINK startup 00:04:00.401 CC test/app/jsoncat/jsoncat.o 00:04:00.401 CXX test/cpp_headers/dma.o 00:04:00.401 LINK accel_perf 00:04:00.658 LINK bdevio 00:04:00.658 LINK jsoncat 00:04:00.659 LINK hello_world 00:04:00.659 LINK hello_blob 00:04:00.659 CXX test/cpp_headers/endian.o 00:04:00.659 CXX test/cpp_headers/env_dpdk.o 00:04:00.659 CC test/nvme/reserve/reserve.o 00:04:00.917 CXX test/cpp_headers/env.o 00:04:00.917 CC test/app/stub/stub.o 00:04:00.917 CC examples/nvme/reconnect/reconnect.o 00:04:00.917 LINK reserve 00:04:00.917 LINK spdk_nvme_identify 00:04:00.917 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:00.917 CC examples/blob/cli/blobcli.o 00:04:00.917 CC app/vhost/vhost.o 00:04:00.917 CXX test/cpp_headers/event.o 00:04:00.917 LINK stub 00:04:01.175 CC test/nvme/simple_copy/simple_copy.o 00:04:01.175 CXX test/cpp_headers/fd_group.o 00:04:01.175 LINK vhost 00:04:01.175 CC test/nvme/connect_stress/connect_stress.o 00:04:01.175 CXX test/cpp_headers/fd.o 00:04:01.175 LINK reconnect 00:04:01.434 LINK spdk_top 00:04:01.434 CXX test/cpp_headers/file.o 00:04:01.434 CXX test/cpp_headers/fsdev.o 00:04:01.434 LINK simple_copy 00:04:01.434 CC test/nvme/boot_partition/boot_partition.o 00:04:01.434 LINK connect_stress 00:04:01.434 LINK nvme_manage 00:04:01.434 LINK blobcli 00:04:01.434 CXX test/cpp_headers/fsdev_module.o 00:04:01.434 CXX test/cpp_headers/ftl.o 00:04:01.693 LINK boot_partition 00:04:01.693 CC app/spdk_dd/spdk_dd.o 00:04:01.693 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:01.693 CC examples/nvme/arbitration/arbitration.o 00:04:01.693 CC app/fio/nvme/fio_plugin.o 00:04:01.693 CC app/fio/bdev/fio_plugin.o 00:04:01.693 CC test/nvme/compliance/nvme_compliance.o 00:04:01.693 CXX test/cpp_headers/fuse_dispatcher.o 00:04:01.693 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.952 CC examples/nvme/hotplug/hotplug.o 00:04:01.952 LINK hello_fsdev 00:04:01.952 CXX test/cpp_headers/gpt_spec.o 00:04:01.952 LINK spdk_dd 00:04:01.952 LINK fused_ordering 00:04:01.952 LINK arbitration 00:04:02.211 LINK hotplug 00:04:02.211 CXX test/cpp_headers/hexlify.o 00:04:02.211 LINK nvme_compliance 00:04:02.211 CXX test/cpp_headers/histogram_data.o 00:04:02.212 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:02.212 CC examples/nvme/abort/abort.o 00:04:02.212 CXX test/cpp_headers/idxd.o 00:04:02.212 
CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:02.212 LINK spdk_nvme 00:04:02.212 LINK spdk_bdev 00:04:02.471 CC test/nvme/fdp/fdp.o 00:04:02.471 LINK cmb_copy 00:04:02.471 CC examples/bdev/hello_world/hello_bdev.o 00:04:02.471 CXX test/cpp_headers/idxd_spec.o 00:04:02.471 CC test/nvme/cuse/cuse.o 00:04:02.471 LINK doorbell_aers 00:04:02.471 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.471 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.731 CXX test/cpp_headers/init.o 00:04:02.731 CXX test/cpp_headers/ioat.o 00:04:02.731 LINK abort 00:04:02.731 LINK hello_bdev 00:04:02.731 CXX test/cpp_headers/ioat_spec.o 00:04:02.731 LINK pmr_persistence 00:04:02.731 LINK fdp 00:04:02.731 CXX test/cpp_headers/iscsi_spec.o 00:04:02.731 CXX test/cpp_headers/json.o 00:04:03.007 CXX test/cpp_headers/jsonrpc.o 00:04:03.007 CXX test/cpp_headers/keyring.o 00:04:03.007 CXX test/cpp_headers/keyring_module.o 00:04:03.007 CXX test/cpp_headers/likely.o 00:04:03.007 CXX test/cpp_headers/log.o 00:04:03.007 CXX test/cpp_headers/lvol.o 00:04:03.007 CXX test/cpp_headers/md5.o 00:04:03.007 CXX test/cpp_headers/memory.o 00:04:03.007 CXX test/cpp_headers/mmio.o 00:04:03.007 CXX test/cpp_headers/nbd.o 00:04:03.007 CXX test/cpp_headers/net.o 00:04:03.007 CXX test/cpp_headers/notify.o 00:04:03.007 CXX test/cpp_headers/nvme.o 00:04:03.007 CXX test/cpp_headers/nvme_intel.o 00:04:03.265 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.265 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.265 CXX test/cpp_headers/nvme_spec.o 00:04:03.265 CXX test/cpp_headers/nvme_zns.o 00:04:03.265 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.265 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.265 CXX test/cpp_headers/nvmf.o 00:04:03.265 CXX test/cpp_headers/nvmf_spec.o 00:04:03.524 CXX test/cpp_headers/nvmf_transport.o 00:04:03.524 CXX test/cpp_headers/opal.o 00:04:03.524 CXX test/cpp_headers/opal_spec.o 00:04:03.524 CXX test/cpp_headers/pci_ids.o 00:04:03.524 CXX test/cpp_headers/pipe.o 00:04:03.524 LINK bdevperf 00:04:03.524 CXX test/cpp_headers/queue.o 00:04:03.524 CXX test/cpp_headers/reduce.o 00:04:03.524 CXX test/cpp_headers/rpc.o 00:04:03.524 CXX test/cpp_headers/scheduler.o 00:04:03.524 CXX test/cpp_headers/scsi.o 00:04:03.524 CXX test/cpp_headers/scsi_spec.o 00:04:03.783 CXX test/cpp_headers/sock.o 00:04:03.783 CXX test/cpp_headers/stdinc.o 00:04:03.783 CXX test/cpp_headers/string.o 00:04:03.783 CXX test/cpp_headers/thread.o 00:04:03.783 CXX test/cpp_headers/trace.o 00:04:03.783 CXX test/cpp_headers/trace_parser.o 00:04:03.783 CXX test/cpp_headers/tree.o 00:04:03.783 CXX test/cpp_headers/ublk.o 00:04:03.783 LINK cuse 00:04:03.783 CXX test/cpp_headers/util.o 00:04:04.042 CXX test/cpp_headers/uuid.o 00:04:04.042 CXX test/cpp_headers/version.o 00:04:04.042 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.042 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.042 CXX test/cpp_headers/vhost.o 00:04:04.042 CC examples/nvmf/nvmf/nvmf.o 00:04:04.042 CXX test/cpp_headers/vmd.o 00:04:04.042 CXX test/cpp_headers/xor.o 00:04:04.042 CXX test/cpp_headers/zipf.o 00:04:04.300 LINK nvmf 00:04:05.237 LINK esnap 00:04:05.496 00:04:05.496 real 1m21.710s 00:04:05.496 user 7m24.278s 00:04:05.496 sys 1m50.929s 00:04:05.496 04:27:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:05.496 ************************************ 00:04:05.496 END TEST make 00:04:05.496 ************************************ 00:04:05.496 04:27:54 make -- common/autotest_common.sh@10 -- $ set +x 00:04:05.496 04:27:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.496 
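The stop_monitor_resources call above dispatches into the pm helpers traced next, which locate each resource monitor's PID file under the power output directory and send it SIGTERM. A minimal standalone sketch of that pattern, with the monitor names and the output path taken from the surrounding trace (this is a reconstruction, not the verbatim pm/common source):

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    monitors=(collect-cpu-load collect-vmstat)
    for monitor in "${monitors[@]}"; do
        pid_file="$power_dir/$monitor.pid"
        [[ -e $pid_file ]] || continue       # this monitor was never started
        pid=$(<"$pid_file")                  # e.g. 5272 / 5273 in this run
        kill -TERM "$pid" 2>/dev/null || :   # ask the collector to exit cleanly
    done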
04:27:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.496 04:27:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.497 04:27:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.497 04:27:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.497 04:27:54 -- pm/common@44 -- $ pid=5272 00:04:05.497 04:27:54 -- pm/common@50 -- $ kill -TERM 5272 00:04:05.497 04:27:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.497 04:27:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.497 04:27:54 -- pm/common@44 -- $ pid=5273 00:04:05.497 04:27:54 -- pm/common@50 -- $ kill -TERM 5273 00:04:05.755 04:27:55 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.755 04:27:55 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.755 04:27:55 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.755 04:27:55 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.755 04:27:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.755 04:27:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.755 04:27:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.755 04:27:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.755 04:27:55 -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.755 04:27:55 -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.755 04:27:55 -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.755 04:27:55 -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.755 04:27:55 -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.755 04:27:55 -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.756 04:27:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.756 04:27:55 -- scripts/common.sh@344 -- # case "$op" in 00:04:05.756 04:27:55 -- scripts/common.sh@345 -- # : 1 00:04:05.756 04:27:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.756 04:27:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.756 04:27:55 -- scripts/common.sh@365 -- # decimal 1 00:04:05.756 04:27:55 -- scripts/common.sh@353 -- # local d=1 00:04:05.756 04:27:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.756 04:27:55 -- scripts/common.sh@355 -- # echo 1 00:04:05.756 04:27:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.756 04:27:55 -- scripts/common.sh@366 -- # decimal 2 00:04:05.756 04:27:55 -- scripts/common.sh@353 -- # local d=2 00:04:05.756 04:27:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.756 04:27:55 -- scripts/common.sh@355 -- # echo 2 00:04:05.756 04:27:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.756 04:27:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.756 04:27:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.756 04:27:55 -- scripts/common.sh@368 -- # return 0 00:04:05.756 04:27:55 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.756 04:27:55 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 04:27:55 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 04:27:55 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 04:27:55 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 04:27:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:05.756 04:27:55 -- nvmf/common.sh@7 -- # uname -s 00:04:05.756 04:27:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.756 04:27:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.756 04:27:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.756 04:27:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.756 04:27:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.756 04:27:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.756 04:27:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.756 04:27:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.756 04:27:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.756 04:27:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.756 04:27:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba16f2d3-e337-44f2-8c24-0537a184f995 00:04:05.756 
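The cmp_versions walk traced above (used to decide whether the installed lcov predates 2.x) splits each version string on '.', '-' and ':' and compares the pieces numerically from left to right, with missing components of the shorter version treated as zero. A condensed sketch of the same algorithm, assuming purely numeric components as the ^[0-9]+$ guards in the trace enforce:

    version_lt() {   # returns 0 when $1 sorts strictly below $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing piece decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is pre-2.x"   # true for the 1.15 seen here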
04:27:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=ba16f2d3-e337-44f2-8c24-0537a184f995 00:04:05.756 04:27:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.756 04:27:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.756 04:27:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.756 04:27:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.756 04:27:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:05.756 04:27:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.756 04:27:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.756 04:27:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.756 04:27:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.756 04:27:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 04:27:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 04:27:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 04:27:55 -- paths/export.sh@5 -- # export PATH 00:04:05.756 04:27:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 04:27:55 -- nvmf/common.sh@51 -- # : 0 00:04:05.756 04:27:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.756 04:27:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:05.756 04:27:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.756 04:27:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.756 04:27:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.756 04:27:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.756 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.756 04:27:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.756 04:27:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.756 04:27:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.756 04:27:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.756 04:27:55 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.756 04:27:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.756 04:27:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.756 04:27:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.756 04:27:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.756 04:27:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.756 04:27:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.756 04:27:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.756 04:27:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.756 04:27:55 -- spdk/autotest.sh@48 -- # udevadm_pid=55102 00:04:05.756 04:27:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.756 04:27:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.756 04:27:55 -- pm/common@17 -- # local monitor 00:04:05.756 04:27:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.756 04:27:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.756 04:27:55 -- pm/common@25 -- # sleep 1 00:04:05.756 04:27:55 -- pm/common@21 -- # date +%s 00:04:05.756 04:27:55 -- pm/common@21 -- # date +%s 00:04:05.757 04:27:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728966475 00:04:05.757 04:27:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728966475 00:04:06.016 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728966475_collect-vmstat.pm.log 00:04:06.016 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728966475_collect-cpu-load.pm.log 00:04:06.952 04:27:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.952 04:27:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.952 04:27:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.952 04:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:06.952 04:27:56 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.953 04:27:56 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:06.953 04:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:06.953 04:27:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:06.953 04:27:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:06.953 04:27:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:06.953 04:27:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:06.953 04:27:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:06.953 04:27:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:06.953 04:27:56 -- common/autotest_common.sh@1455 -- # uname 00:04:06.953 04:27:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:06.953 04:27:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:06.953 04:27:56 -- common/autotest_common.sh@1475 -- # uname 00:04:06.953 04:27:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:06.953 04:27:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:06.953 04:27:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:06.953 lcov: LCOV version 1.15 00:04:06.953 04:27:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:25.040 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:25.040 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:37.249 04:28:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:37.249 04:28:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.249 04:28:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.249 04:28:26 -- spdk/autotest.sh@78 -- # rm -f 00:04:37.249 04:28:26 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.480 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:38.480 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:38.480 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:38.738 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:38.738 04:28:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:38.738 04:28:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:38.738 04:28:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:38.738 04:28:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:38.738 04:28:28 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:38.738 04:28:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:38.738 04:28:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:38.738 04:28:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:38.738 04:28:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:38.738 04:28:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.738 04:28:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.738 04:28:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:38.738 04:28:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:38.738 04:28:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:38.738 No valid GPT data, bailing 00:04:38.738 04:28:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.738 04:28:28 -- scripts/common.sh@394 -- # pt= 00:04:38.738 04:28:28 -- scripts/common.sh@395 -- # return 1 00:04:38.738 04:28:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:38.738 1+0 records in 00:04:38.738 1+0 records out 00:04:38.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200035 s, 52.4 MB/s 00:04:38.738 04:28:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.738 04:28:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.738 04:28:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:38.738 04:28:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:38.738 04:28:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:38.738 No valid GPT data, bailing 00:04:38.738 04:28:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:38.738 04:28:28 -- scripts/common.sh@394 -- # pt= 00:04:38.738 04:28:28 -- scripts/common.sh@395 -- # return 1 00:04:38.738 04:28:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:38.738 1+0 records in 00:04:38.738 1+0 records out 00:04:38.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637905 s, 164 MB/s 00:04:38.738 04:28:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.738 04:28:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.738 04:28:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:38.738 04:28:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:38.738 04:28:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:38.996 No valid GPT data, bailing 00:04:38.996 04:28:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:38.996 04:28:28 -- scripts/common.sh@394 -- # pt= 00:04:38.996 04:28:28 -- scripts/common.sh@395 -- # return 1 00:04:38.996 04:28:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:38.996 1+0 
records in 00:04:38.996 1+0 records out 00:04:38.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041861 s, 250 MB/s 00:04:38.996 04:28:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.996 04:28:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.996 04:28:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:38.996 04:28:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:38.996 04:28:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:38.996 No valid GPT data, bailing 00:04:38.996 04:28:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:38.996 04:28:28 -- scripts/common.sh@394 -- # pt= 00:04:38.996 04:28:28 -- scripts/common.sh@395 -- # return 1 00:04:38.996 04:28:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:38.996 1+0 records in 00:04:38.996 1+0 records out 00:04:38.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635833 s, 165 MB/s 00:04:38.996 04:28:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.996 04:28:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.996 04:28:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:38.996 04:28:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:38.996 04:28:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:38.996 No valid GPT data, bailing 00:04:38.996 04:28:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:38.996 04:28:28 -- scripts/common.sh@394 -- # pt= 00:04:38.996 04:28:28 -- scripts/common.sh@395 -- # return 1 00:04:38.996 04:28:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:38.996 1+0 records in 00:04:38.996 1+0 records out 00:04:38.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566287 s, 185 MB/s 00:04:38.996 04:28:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.996 04:28:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.996 04:28:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:38.996 04:28:28 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:38.996 04:28:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:39.255 No valid GPT data, bailing 00:04:39.255 04:28:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:39.255 04:28:28 -- scripts/common.sh@394 -- # pt= 00:04:39.255 04:28:28 -- scripts/common.sh@395 -- # return 1 00:04:39.255 04:28:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:39.255 1+0 records in 00:04:39.255 1+0 records out 00:04:39.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633208 s, 166 MB/s 00:04:39.255 04:28:28 -- spdk/autotest.sh@105 -- # sync 00:04:39.255 04:28:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.255 04:28:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.255 04:28:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:42.565 04:28:31 -- spdk/autotest.sh@111 -- # uname -s 00:04:42.565 04:28:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:42.565 04:28:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:42.565 04:28:31 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.134 
Hugepages 00:04:43.134 node hugesize free / total 00:04:43.134 node0 1048576kB 0 / 0 00:04:43.134 node0 2048kB 0 / 0 00:04:43.134 00:04:43.134 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.392 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:43.392 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:43.651 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:43.651 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:43.651 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:43.651 04:28:33 -- spdk/autotest.sh@117 -- # uname -s 00:04:43.651 04:28:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:43.651 04:28:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:43.651 04:28:33 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.156 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.156 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.156 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.156 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.416 04:28:34 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:46.373 04:28:35 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:46.373 04:28:35 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:46.373 04:28:35 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:46.373 04:28:35 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:46.373 04:28:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:46.373 04:28:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:46.373 04:28:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.373 04:28:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.373 04:28:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:46.373 04:28:35 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:46.373 04:28:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:46.373 04:28:35 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.199 Waiting for block devices as requested 00:04:47.199 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.458 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.458 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.717 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.991 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:52.991 04:28:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:52.991 04:28:42 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:52.991 04:28:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:52.991 04:28:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1541 -- # continue 00:04:52.991 04:28:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:52.991 04:28:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1541 -- # continue 00:04:52.991 04:28:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:52.991 04:28:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1541 -- # continue 00:04:52.991 04:28:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:52.991 04:28:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:52.991 04:28:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:52.991 04:28:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:52.991 04:28:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
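Each controller in the loop above clears the same two gates before the final continue below: OACS (Optional Admin Command Support, 0x12a on these controllers) is masked against 0x8 to confirm Namespace Management support (0x12a & 0x8 == 8), and unvmcap must be 0, i.e. there is no unallocated NVM capacity to reclaim. The per-controller check, reduced to a standalone sketch using the same nvme-cli invocations as the trace:

    ctrlr=/dev/nvme1                                    # any controller from the loop above
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then                    # OACS bit 3: Namespace Management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: nothing to revert"
    fi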
00:04:52.991 04:28:42 -- common/autotest_common.sh@1541 -- # continue 00:04:52.991 04:28:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:52.991 04:28:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.991 04:28:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.991 04:28:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:52.991 04:28:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.991 04:28:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.991 04:28:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.496 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.496 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.496 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.496 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.755 04:28:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:54.755 04:28:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.755 04:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:54.755 04:28:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:54.755 04:28:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:54.755 04:28:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.755 04:28:44 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:54.755 04:28:44 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:54.755 04:28:44 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:54.755 04:28:44 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:54.755 04:28:44 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:54.755 04:28:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:54.755 04:28:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:54.755 04:28:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.755 04:28:44 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.755 04:28:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:54.755 04:28:44 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:54.755 04:28:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:54.755 04:28:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:54.755 04:28:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.755 04:28:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:54.755 04:28:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.755 04:28:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:54.755 04:28:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
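opal_revert_cleanup above first enumerates the NVMe BDFs by piping gen_nvme.sh through jq, then keeps only controllers whose PCI device ID matches the 0x0a54 the script looks for; the QEMU-emulated controllers here all report 0x0010, so the two comparisons above and the two remaining ones below all fail and the revert list stays empty. The enumerate-and-filter step as a standalone sketch, with the gen_nvme.sh path and jq filter exactly as in the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    revert=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on these QEMU devices
        [[ $device == 0x0a54 ]] && revert+=("$bdf")
    done
    echo "controllers to opal-revert: ${#revert[@]}"       # 0 in this run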
00:04:54.755 04:28:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:54.755 04:28:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:54.755 04:28:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.755 04:28:44 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:54.755 04:28:44 -- common/autotest_common.sh@1570 -- # return 0 00:04:54.755 04:28:44 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:54.755 04:28:44 -- common/autotest_common.sh@1578 -- # return 0 00:04:54.755 04:28:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:54.755 04:28:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:54.755 04:28:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.755 04:28:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.755 04:28:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:54.755 04:28:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.755 04:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:54.755 04:28:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:54.755 04:28:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.755 04:28:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.755 04:28:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.755 04:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:54.755 ************************************ 00:04:54.755 START TEST env 00:04:54.755 ************************************ 00:04:54.755 04:28:44 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:55.013 * Looking for test storage... 00:04:55.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:55.013 04:28:44 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.013 04:28:44 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.013 04:28:44 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.013 04:28:44 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.013 04:28:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.013 04:28:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.013 04:28:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.013 04:28:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.013 04:28:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.013 04:28:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.013 04:28:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.013 04:28:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.013 04:28:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.014 04:28:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.014 04:28:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.014 04:28:44 env -- scripts/common.sh@344 -- # case "$op" in 00:04:55.014 04:28:44 env -- scripts/common.sh@345 -- # : 1 00:04:55.014 04:28:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.014 04:28:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.014 04:28:44 env -- scripts/common.sh@365 -- # decimal 1 00:04:55.014 04:28:44 env -- scripts/common.sh@353 -- # local d=1 00:04:55.014 04:28:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.014 04:28:44 env -- scripts/common.sh@355 -- # echo 1 00:04:55.014 04:28:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.014 04:28:44 env -- scripts/common.sh@366 -- # decimal 2 00:04:55.014 04:28:44 env -- scripts/common.sh@353 -- # local d=2 00:04:55.014 04:28:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.014 04:28:44 env -- scripts/common.sh@355 -- # echo 2 00:04:55.014 04:28:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.014 04:28:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.014 04:28:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.014 04:28:44 env -- scripts/common.sh@368 -- # return 0 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.014 --rc genhtml_branch_coverage=1 00:04:55.014 --rc genhtml_function_coverage=1 00:04:55.014 --rc genhtml_legend=1 00:04:55.014 --rc geninfo_all_blocks=1 00:04:55.014 --rc geninfo_unexecuted_blocks=1 00:04:55.014 00:04:55.014 ' 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.014 --rc genhtml_branch_coverage=1 00:04:55.014 --rc genhtml_function_coverage=1 00:04:55.014 --rc genhtml_legend=1 00:04:55.014 --rc geninfo_all_blocks=1 00:04:55.014 --rc geninfo_unexecuted_blocks=1 00:04:55.014 00:04:55.014 ' 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.014 --rc genhtml_branch_coverage=1 00:04:55.014 --rc genhtml_function_coverage=1 00:04:55.014 --rc genhtml_legend=1 00:04:55.014 --rc geninfo_all_blocks=1 00:04:55.014 --rc geninfo_unexecuted_blocks=1 00:04:55.014 00:04:55.014 ' 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.014 --rc genhtml_branch_coverage=1 00:04:55.014 --rc genhtml_function_coverage=1 00:04:55.014 --rc genhtml_legend=1 00:04:55.014 --rc geninfo_all_blocks=1 00:04:55.014 --rc geninfo_unexecuted_blocks=1 00:04:55.014 00:04:55.014 ' 00:04:55.014 04:28:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.014 04:28:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.014 04:28:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.014 ************************************ 00:04:55.014 START TEST env_memory 00:04:55.014 ************************************ 00:04:55.014 04:28:44 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:55.014 00:04:55.014 00:04:55.014 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.014 http://cunit.sourceforge.net/ 00:04:55.014 00:04:55.014 00:04:55.014 Suite: memory 00:04:55.272 Test: alloc and free memory map ...[2024-10-15 04:28:44.534705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:55.272 passed 00:04:55.272 Test: mem map translation ...[2024-10-15 04:28:44.579907] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:55.272 [2024-10-15 04:28:44.580117] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:55.272 [2024-10-15 04:28:44.580310] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:55.272 [2024-10-15 04:28:44.580372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:55.272 passed 00:04:55.272 Test: mem map registration ...[2024-10-15 04:28:44.648785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:55.272 [2024-10-15 04:28:44.648989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:55.272 passed 00:04:55.272 Test: mem map adjacent registrations ...passed 00:04:55.272 00:04:55.272 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.272 suites 1 1 n/a 0 0 00:04:55.272 tests 4 4 4 0 0 00:04:55.272 asserts 152 152 152 0 n/a 00:04:55.272 00:04:55.272 Elapsed time = 0.243 seconds 00:04:55.272 00:04:55.272 ************************************ 00:04:55.272 END TEST env_memory 00:04:55.272 ************************************ 00:04:55.272 real 0m0.299s 00:04:55.272 user 0m0.258s 00:04:55.272 sys 0m0.029s 00:04:55.272 04:28:44 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.272 04:28:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:55.530 04:28:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:55.530 04:28:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.530 04:28:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.530 04:28:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.530 ************************************ 00:04:55.530 START TEST env_vtophys 00:04:55.530 ************************************ 00:04:55.530 04:28:44 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:55.530 EAL: lib.eal log level changed from notice to debug 00:04:55.530 EAL: Detected lcore 0 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 1 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 2 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 3 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 4 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 5 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 6 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 7 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 8 as core 0 on socket 0 00:04:55.530 EAL: Detected lcore 9 as core 0 on socket 0 00:04:55.530 EAL: Maximum logical cores by configuration: 128 00:04:55.530 EAL: Detected CPU lcores: 10 00:04:55.530 EAL: Detected NUMA nodes: 1 00:04:55.530 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:55.530 EAL: Detected shared linkage of DPDK 00:04:55.530 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:55.530 EAL: Selected IOVA mode 'PA' 00:04:55.530 EAL: Probing VFIO support... 00:04:55.530 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:55.530 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:55.530 EAL: Ask a virtual area of 0x2e000 bytes 00:04:55.530 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:55.530 EAL: Setting up physically contiguous memory... 00:04:55.530 EAL: Setting maximum number of open files to 524288 00:04:55.530 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:55.530 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:55.530 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.531 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:55.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.531 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.531 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:55.531 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:55.531 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.531 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:55.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.531 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.531 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:55.531 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:55.531 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.531 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:55.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.531 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.531 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:55.531 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:55.531 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.531 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:55.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.531 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.531 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:55.531 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:55.531 EAL: Hugepages will be freed exactly as allocated. 00:04:55.531 EAL: No shared files mode enabled, IPC is disabled 00:04:55.531 EAL: No shared files mode enabled, IPC is disabled 00:04:55.531 EAL: TSC frequency is ~2490000 KHz 00:04:55.531 EAL: Main lcore 0 is ready (tid=7f5607117a40;cpuset=[0]) 00:04:55.531 EAL: Trying to obtain current memory policy. 00:04:55.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.531 EAL: Restoring previous memory policy: 0 00:04:55.531 EAL: request: mp_malloc_sync 00:04:55.531 EAL: No shared files mode enabled, IPC is disabled 00:04:55.531 EAL: Heap on socket 0 was expanded by 2MB 00:04:55.531 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:55.788 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:55.788 EAL: Mem event callback 'spdk:(nil)' registered 00:04:55.788 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:55.788 00:04:55.788 00:04:55.788 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.788 http://cunit.sourceforge.net/ 00:04:55.788 00:04:55.788 00:04:55.788 Suite: components_suite 00:04:56.046 Test: vtophys_malloc_test ...passed 00:04:56.046 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:56.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.046 EAL: Restoring previous memory policy: 4 00:04:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.046 EAL: request: mp_malloc_sync 00:04:56.046 EAL: No shared files mode enabled, IPC is disabled 00:04:56.046 EAL: Heap on socket 0 was expanded by 4MB 00:04:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.046 EAL: request: mp_malloc_sync 00:04:56.046 EAL: No shared files mode enabled, IPC is disabled 00:04:56.046 EAL: Heap on socket 0 was shrunk by 4MB 00:04:56.046 EAL: Trying to obtain current memory policy. 00:04:56.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.046 EAL: Restoring previous memory policy: 4 00:04:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.046 EAL: request: mp_malloc_sync 00:04:56.046 EAL: No shared files mode enabled, IPC is disabled 00:04:56.046 EAL: Heap on socket 0 was expanded by 6MB 00:04:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.046 EAL: request: mp_malloc_sync 00:04:56.046 EAL: No shared files mode enabled, IPC is disabled 00:04:56.046 EAL: Heap on socket 0 was shrunk by 6MB 00:04:56.046 EAL: Trying to obtain current memory policy. 00:04:56.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.046 EAL: Restoring previous memory policy: 4 00:04:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.046 EAL: request: mp_malloc_sync 00:04:56.046 EAL: No shared files mode enabled, IPC is disabled 00:04:56.046 EAL: Heap on socket 0 was expanded by 10MB 00:04:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.046 EAL: request: mp_malloc_sync 00:04:56.303 EAL: No shared files mode enabled, IPC is disabled 00:04:56.303 EAL: Heap on socket 0 was shrunk by 10MB 00:04:56.303 EAL: Trying to obtain current memory policy. 00:04:56.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.303 EAL: Restoring previous memory policy: 4 00:04:56.303 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.303 EAL: request: mp_malloc_sync 00:04:56.303 EAL: No shared files mode enabled, IPC is disabled 00:04:56.303 EAL: Heap on socket 0 was expanded by 18MB 00:04:56.303 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.303 EAL: request: mp_malloc_sync 00:04:56.303 EAL: No shared files mode enabled, IPC is disabled 00:04:56.303 EAL: Heap on socket 0 was shrunk by 18MB 00:04:56.303 EAL: Trying to obtain current memory policy. 00:04:56.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.303 EAL: Restoring previous memory policy: 4 00:04:56.303 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.303 EAL: request: mp_malloc_sync 00:04:56.303 EAL: No shared files mode enabled, IPC is disabled 00:04:56.303 EAL: Heap on socket 0 was expanded by 34MB 00:04:56.303 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.303 EAL: request: mp_malloc_sync 00:04:56.303 EAL: No shared files mode enabled, IPC is disabled 00:04:56.303 EAL: Heap on socket 0 was shrunk by 34MB 00:04:56.303 EAL: Trying to obtain current memory policy. 
00:04:56.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.303 EAL: Restoring previous memory policy: 4 00:04:56.303 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.303 EAL: request: mp_malloc_sync 00:04:56.303 EAL: No shared files mode enabled, IPC is disabled 00:04:56.303 EAL: Heap on socket 0 was expanded by 66MB 00:04:56.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.561 EAL: request: mp_malloc_sync 00:04:56.561 EAL: No shared files mode enabled, IPC is disabled 00:04:56.561 EAL: Heap on socket 0 was shrunk by 66MB 00:04:56.561 EAL: Trying to obtain current memory policy. 00:04:56.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.561 EAL: Restoring previous memory policy: 4 00:04:56.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.561 EAL: request: mp_malloc_sync 00:04:56.561 EAL: No shared files mode enabled, IPC is disabled 00:04:56.561 EAL: Heap on socket 0 was expanded by 130MB 00:04:56.819 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.819 EAL: request: mp_malloc_sync 00:04:56.819 EAL: No shared files mode enabled, IPC is disabled 00:04:56.819 EAL: Heap on socket 0 was shrunk by 130MB 00:04:57.076 EAL: Trying to obtain current memory policy. 00:04:57.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.076 EAL: Restoring previous memory policy: 4 00:04:57.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.076 EAL: request: mp_malloc_sync 00:04:57.076 EAL: No shared files mode enabled, IPC is disabled 00:04:57.076 EAL: Heap on socket 0 was expanded by 258MB 00:04:57.642 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.642 EAL: request: mp_malloc_sync 00:04:57.642 EAL: No shared files mode enabled, IPC is disabled 00:04:57.642 EAL: Heap on socket 0 was shrunk by 258MB 00:04:58.210 EAL: Trying to obtain current memory policy. 00:04:58.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.210 EAL: Restoring previous memory policy: 4 00:04:58.210 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.210 EAL: request: mp_malloc_sync 00:04:58.210 EAL: No shared files mode enabled, IPC is disabled 00:04:58.210 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.149 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.149 EAL: request: mp_malloc_sync 00:04:59.149 EAL: No shared files mode enabled, IPC is disabled 00:04:59.149 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.087 EAL: Trying to obtain current memory policy. 
00:05:00.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.347 EAL: Restoring previous memory policy: 4 00:05:00.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.347 EAL: request: mp_malloc_sync 00:05:00.347 EAL: No shared files mode enabled, IPC is disabled 00:05:00.347 EAL: Heap on socket 0 was expanded by 1026MB 00:05:02.254 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.254 EAL: request: mp_malloc_sync 00:05:02.254 EAL: No shared files mode enabled, IPC is disabled 00:05:02.254 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:04.158 passed 00:05:04.158 00:05:04.158 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.158 suites 1 1 n/a 0 0 00:05:04.158 tests 2 2 2 0 0 00:05:04.158 asserts 5607 5607 5607 0 n/a 00:05:04.158 00:05:04.158 Elapsed time = 8.239 seconds 00:05:04.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.158 EAL: request: mp_malloc_sync 00:05:04.158 EAL: No shared files mode enabled, IPC is disabled 00:05:04.158 EAL: Heap on socket 0 was shrunk by 2MB 00:05:04.159 EAL: No shared files mode enabled, IPC is disabled 00:05:04.159 EAL: No shared files mode enabled, IPC is disabled 00:05:04.159 EAL: No shared files mode enabled, IPC is disabled 00:05:04.159 00:05:04.159 real 0m8.575s 00:05:04.159 user 0m7.519s 00:05:04.159 sys 0m0.891s 00:05:04.159 ************************************ 00:05:04.159 END TEST env_vtophys 00:05:04.159 ************************************ 00:05:04.159 04:28:53 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.159 04:28:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:04.159 04:28:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:04.159 04:28:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.159 04:28:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.159 04:28:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.159 ************************************ 00:05:04.159 START TEST env_pci 00:05:04.159 ************************************ 00:05:04.159 04:28:53 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:04.159 00:05:04.159 00:05:04.159 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.159 http://cunit.sourceforge.net/ 00:05:04.159 00:05:04.159 00:05:04.159 Suite: pci 00:05:04.159 Test: pci_hook ...[2024-10-15 04:28:53.519787] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57943 has claimed it 00:05:04.159 EAL: Cannot find device (10000:00:01.0) 00:05:04.159 EAL: Failed to attach device on primary process 00:05:04.159 passed 00:05:04.159 00:05:04.159 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.159 suites 1 1 n/a 0 0 00:05:04.159 tests 1 1 1 0 0 00:05:04.159 asserts 25 25 25 0 n/a 00:05:04.159 00:05:04.159 Elapsed time = 0.008 seconds 00:05:04.159 00:05:04.159 real 0m0.110s 00:05:04.159 user 0m0.046s 00:05:04.159 sys 0m0.062s 00:05:04.159 ************************************ 00:05:04.159 END TEST env_pci 00:05:04.159 ************************************ 00:05:04.159 04:28:53 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.159 04:28:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:04.159 04:28:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:04.159 04:28:53 env -- env/env.sh@15 -- # uname 00:05:04.159 04:28:53 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:04.159 04:28:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:04.159 04:28:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.159 04:28:53 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:04.159 04:28:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.159 04:28:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.159 ************************************ 00:05:04.159 START TEST env_dpdk_post_init 00:05:04.159 ************************************ 00:05:04.159 04:28:53 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.418 EAL: Detected CPU lcores: 10 00:05:04.418 EAL: Detected NUMA nodes: 1 00:05:04.418 EAL: Detected shared linkage of DPDK 00:05:04.418 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.418 EAL: Selected IOVA mode 'PA' 00:05:04.418 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.418 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:04.418 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:04.418 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:04.418 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:04.678 Starting DPDK initialization... 00:05:04.678 Starting SPDK post initialization... 00:05:04.678 SPDK NVMe probe 00:05:04.678 Attaching to 0000:00:10.0 00:05:04.678 Attaching to 0000:00:11.0 00:05:04.678 Attaching to 0000:00:12.0 00:05:04.678 Attaching to 0000:00:13.0 00:05:04.678 Attached to 0000:00:10.0 00:05:04.678 Attached to 0000:00:11.0 00:05:04.678 Attached to 0000:00:13.0 00:05:04.678 Attached to 0000:00:12.0 00:05:04.678 Cleaning up... 
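The argument string assembled just above for env_dpdk_post_init deserves a note: the test runs on one core (-c 0x1) and, on Linux, pins DPDK's virtual-address base so hugepage regions land at the predictable 0x200000000000 addresses seen in the earlier "Virtual area found" lines. In sketch form (the direct invocation at the end is illustrative — the suite actually goes through run_test, and $testdir here stands in for the suite's test/env directory):

    # Build the EAL argument string the way env.sh does.
    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        # Fixed VA base keeps mappings stable across primary/secondary processes.
        argv+=--base-virtaddr=0x200000000000
    fi
    "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv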
00:05:04.678 00:05:04.678 real 0m0.302s 00:05:04.678 user 0m0.097s 00:05:04.678 sys 0m0.108s 00:05:04.678 04:28:53 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.678 04:28:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.678 ************************************ 00:05:04.678 END TEST env_dpdk_post_init 00:05:04.678 ************************************ 00:05:04.678 04:28:54 env -- env/env.sh@26 -- # uname 00:05:04.678 04:28:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:04.678 04:28:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.678 04:28:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.678 04:28:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.678 04:28:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.678 ************************************ 00:05:04.678 START TEST env_mem_callbacks 00:05:04.678 ************************************ 00:05:04.678 04:28:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.678 EAL: Detected CPU lcores: 10 00:05:04.678 EAL: Detected NUMA nodes: 1 00:05:04.678 EAL: Detected shared linkage of DPDK 00:05:04.678 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.678 EAL: Selected IOVA mode 'PA' 00:05:04.937 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.937 00:05:04.937 00:05:04.937 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.937 http://cunit.sourceforge.net/ 00:05:04.937 00:05:04.937 00:05:04.937 Suite: memory 00:05:04.937 Test: test ... 00:05:04.937 register 0x200000200000 2097152 00:05:04.937 malloc 3145728 00:05:04.937 register 0x200000400000 4194304 00:05:04.937 buf 0x2000004fffc0 len 3145728 PASSED 00:05:04.937 malloc 64 00:05:04.937 buf 0x2000004ffec0 len 64 PASSED 00:05:04.937 malloc 4194304 00:05:04.937 register 0x200000800000 6291456 00:05:04.937 buf 0x2000009fffc0 len 4194304 PASSED 00:05:04.937 free 0x2000004fffc0 3145728 00:05:04.937 free 0x2000004ffec0 64 00:05:04.937 unregister 0x200000400000 4194304 PASSED 00:05:04.937 free 0x2000009fffc0 4194304 00:05:04.937 unregister 0x200000800000 6291456 PASSED 00:05:04.937 malloc 8388608 00:05:04.937 register 0x200000400000 10485760 00:05:04.937 buf 0x2000005fffc0 len 8388608 PASSED 00:05:04.937 free 0x2000005fffc0 8388608 00:05:04.937 unregister 0x200000400000 10485760 PASSED 00:05:04.937 passed 00:05:04.937 00:05:04.937 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.937 suites 1 1 n/a 0 0 00:05:04.937 tests 1 1 1 0 0 00:05:04.937 asserts 15 15 15 0 n/a 00:05:04.937 00:05:04.937 Elapsed time = 0.082 seconds 00:05:04.937 00:05:04.937 real 0m0.313s 00:05:04.937 user 0m0.122s 00:05:04.937 sys 0m0.086s 00:05:04.937 04:28:54 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.937 04:28:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:04.937 ************************************ 00:05:04.937 END TEST env_mem_callbacks 00:05:04.937 ************************************ 00:05:04.937 00:05:04.937 real 0m10.144s 00:05:04.937 user 0m8.263s 00:05:04.937 sys 0m1.508s 00:05:04.937 04:28:54 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.938 ************************************ 00:05:04.938 END TEST env 00:05:04.938 ************************************ 00:05:04.938 04:28:54 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.196 04:28:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:05.196 04:28:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.196 04:28:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.196 04:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:05.196 ************************************ 00:05:05.196 START TEST rpc 00:05:05.196 ************************************ 00:05:05.196 04:28:54 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:05.196 * Looking for test storage... 00:05:05.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:05.196 04:28:54 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.196 04:28:54 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.196 04:28:54 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.196 04:28:54 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.196 04:28:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.196 04:28:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.196 04:28:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.196 04:28:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.196 04:28:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.196 04:28:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.196 04:28:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.197 04:28:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.197 04:28:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.197 04:28:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.197 04:28:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.197 04:28:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.197 04:28:54 rpc -- scripts/common.sh@345 -- # : 1 00:05:05.197 04:28:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.197 04:28:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.197 04:28:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.197 04:28:54 rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.197 04:28:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.197 04:28:54 rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.197 04:28:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.197 04:28:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.197 04:28:54 rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.197 04:28:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.197 04:28:54 rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.456 04:28:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.456 04:28:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.456 04:28:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.456 04:28:54 rpc -- scripts/common.sh@368 -- # return 0 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.456 --rc genhtml_branch_coverage=1 00:05:05.456 --rc genhtml_function_coverage=1 00:05:05.456 --rc genhtml_legend=1 00:05:05.456 --rc geninfo_all_blocks=1 00:05:05.456 --rc geninfo_unexecuted_blocks=1 00:05:05.456 00:05:05.456 ' 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.456 --rc genhtml_branch_coverage=1 00:05:05.456 --rc genhtml_function_coverage=1 00:05:05.456 --rc genhtml_legend=1 00:05:05.456 --rc geninfo_all_blocks=1 00:05:05.456 --rc geninfo_unexecuted_blocks=1 00:05:05.456 00:05:05.456 ' 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.456 --rc genhtml_branch_coverage=1 00:05:05.456 --rc genhtml_function_coverage=1 00:05:05.456 --rc genhtml_legend=1 00:05:05.456 --rc geninfo_all_blocks=1 00:05:05.456 --rc geninfo_unexecuted_blocks=1 00:05:05.456 00:05:05.456 ' 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.456 --rc genhtml_branch_coverage=1 00:05:05.456 --rc genhtml_function_coverage=1 00:05:05.456 --rc genhtml_legend=1 00:05:05.456 --rc geninfo_all_blocks=1 00:05:05.456 --rc geninfo_unexecuted_blocks=1 00:05:05.456 00:05:05.456 ' 00:05:05.456 04:28:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58070 00:05:05.456 04:28:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:05.456 04:28:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.456 04:28:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58070 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@831 -- # '[' -z 58070 ']' 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
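rpc.sh is now bringing up its target: spdk_tgt is launched with the bdev subsystem enabled (-e bdev), the trap guarantees cleanup on exit, and waitforlisten blocks until the RPC socket answers. A minimal sketch of that handshake, assuming the default /var/tmp/spdk.sock socket — the polling loop is an illustration; the real waitforlisten in autotest_common.sh adds a retry limit and more thorough liveness checks:

    # Launch the target and wait for its RPC socket to come up.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_pid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done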
00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.456 04:28:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.456 [2024-10-15 04:28:54.835658] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:05:05.456 [2024-10-15 04:28:54.836022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58070 ] 00:05:05.716 [2024-10-15 04:28:55.015952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.716 [2024-10-15 04:28:55.133475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:05.716 [2024-10-15 04:28:55.133778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58070' to capture a snapshot of events at runtime. 00:05:05.716 [2024-10-15 04:28:55.133991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:05.716 [2024-10-15 04:28:55.134063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:05.716 [2024-10-15 04:28:55.134096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58070 for offline analysis/debug. 00:05:05.716 [2024-10-15 04:28:55.135526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.653 04:28:56 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.653 04:28:56 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:06.653 04:28:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.653 04:28:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.653 04:28:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:06.653 04:28:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:06.653 04:28:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.653 04:28:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.654 04:28:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.654 ************************************ 00:05:06.654 START TEST rpc_integrity 00:05:06.654 ************************************ 00:05:06.654 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:06.654 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.654 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.654 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.654 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.654 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.654 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.913 04:28:56 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.913 { 00:05:06.913 "name": "Malloc0", 00:05:06.913 "aliases": [ 00:05:06.913 "74844fab-51ac-40f4-8c61-2b5928d10cf7" 00:05:06.913 ], 00:05:06.913 "product_name": "Malloc disk", 00:05:06.913 "block_size": 512, 00:05:06.913 "num_blocks": 16384, 00:05:06.913 "uuid": "74844fab-51ac-40f4-8c61-2b5928d10cf7", 00:05:06.913 "assigned_rate_limits": { 00:05:06.913 "rw_ios_per_sec": 0, 00:05:06.913 "rw_mbytes_per_sec": 0, 00:05:06.913 "r_mbytes_per_sec": 0, 00:05:06.913 "w_mbytes_per_sec": 0 00:05:06.913 }, 00:05:06.913 "claimed": false, 00:05:06.913 "zoned": false, 00:05:06.913 "supported_io_types": { 00:05:06.913 "read": true, 00:05:06.913 "write": true, 00:05:06.913 "unmap": true, 00:05:06.913 "flush": true, 00:05:06.913 "reset": true, 00:05:06.913 "nvme_admin": false, 00:05:06.913 "nvme_io": false, 00:05:06.913 "nvme_io_md": false, 00:05:06.913 "write_zeroes": true, 00:05:06.913 "zcopy": true, 00:05:06.913 "get_zone_info": false, 00:05:06.913 "zone_management": false, 00:05:06.913 "zone_append": false, 00:05:06.913 "compare": false, 00:05:06.913 "compare_and_write": false, 00:05:06.913 "abort": true, 00:05:06.913 "seek_hole": false, 00:05:06.913 "seek_data": false, 00:05:06.913 "copy": true, 00:05:06.913 "nvme_iov_md": false 00:05:06.913 }, 00:05:06.913 "memory_domains": [ 00:05:06.913 { 00:05:06.913 "dma_device_id": "system", 00:05:06.913 "dma_device_type": 1 00:05:06.913 }, 00:05:06.913 { 00:05:06.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.913 "dma_device_type": 2 00:05:06.913 } 00:05:06.913 ], 00:05:06.913 "driver_specific": {} 00:05:06.913 } 00:05:06.913 ]' 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.913 [2024-10-15 04:28:56.276207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:06.913 [2024-10-15 04:28:56.276288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.913 [2024-10-15 04:28:56.276320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:06.913 [2024-10-15 04:28:56.276337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:06.913 [2024-10-15 04:28:56.279157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:06.913 [2024-10-15 04:28:56.279211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:06.913 Passthru0 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.913 
04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.913 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.913 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:06.913 { 00:05:06.913 "name": "Malloc0", 00:05:06.913 "aliases": [ 00:05:06.913 "74844fab-51ac-40f4-8c61-2b5928d10cf7" 00:05:06.913 ], 00:05:06.913 "product_name": "Malloc disk", 00:05:06.913 "block_size": 512, 00:05:06.913 "num_blocks": 16384, 00:05:06.913 "uuid": "74844fab-51ac-40f4-8c61-2b5928d10cf7", 00:05:06.913 "assigned_rate_limits": { 00:05:06.913 "rw_ios_per_sec": 0, 00:05:06.913 "rw_mbytes_per_sec": 0, 00:05:06.913 "r_mbytes_per_sec": 0, 00:05:06.913 "w_mbytes_per_sec": 0 00:05:06.913 }, 00:05:06.913 "claimed": true, 00:05:06.913 "claim_type": "exclusive_write", 00:05:06.913 "zoned": false, 00:05:06.913 "supported_io_types": { 00:05:06.913 "read": true, 00:05:06.913 "write": true, 00:05:06.913 "unmap": true, 00:05:06.913 "flush": true, 00:05:06.913 "reset": true, 00:05:06.913 "nvme_admin": false, 00:05:06.913 "nvme_io": false, 00:05:06.913 "nvme_io_md": false, 00:05:06.913 "write_zeroes": true, 00:05:06.913 "zcopy": true, 00:05:06.913 "get_zone_info": false, 00:05:06.913 "zone_management": false, 00:05:06.913 "zone_append": false, 00:05:06.913 "compare": false, 00:05:06.913 "compare_and_write": false, 00:05:06.913 "abort": true, 00:05:06.913 "seek_hole": false, 00:05:06.913 "seek_data": false, 00:05:06.913 "copy": true, 00:05:06.913 "nvme_iov_md": false 00:05:06.913 }, 00:05:06.913 "memory_domains": [ 00:05:06.913 { 00:05:06.913 "dma_device_id": "system", 00:05:06.913 "dma_device_type": 1 00:05:06.913 }, 00:05:06.913 { 00:05:06.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.913 "dma_device_type": 2 00:05:06.913 } 00:05:06.913 ], 00:05:06.913 "driver_specific": {} 00:05:06.913 }, 00:05:06.913 { 00:05:06.913 "name": "Passthru0", 00:05:06.913 "aliases": [ 00:05:06.913 "0e1e5329-1d2a-5277-b3f4-cd50324a8063" 00:05:06.913 ], 00:05:06.913 "product_name": "passthru", 00:05:06.913 "block_size": 512, 00:05:06.913 "num_blocks": 16384, 00:05:06.913 "uuid": "0e1e5329-1d2a-5277-b3f4-cd50324a8063", 00:05:06.913 "assigned_rate_limits": { 00:05:06.913 "rw_ios_per_sec": 0, 00:05:06.913 "rw_mbytes_per_sec": 0, 00:05:06.913 "r_mbytes_per_sec": 0, 00:05:06.913 "w_mbytes_per_sec": 0 00:05:06.913 }, 00:05:06.913 "claimed": false, 00:05:06.913 "zoned": false, 00:05:06.914 "supported_io_types": { 00:05:06.914 "read": true, 00:05:06.914 "write": true, 00:05:06.914 "unmap": true, 00:05:06.914 "flush": true, 00:05:06.914 "reset": true, 00:05:06.914 "nvme_admin": false, 00:05:06.914 "nvme_io": false, 00:05:06.914 "nvme_io_md": false, 00:05:06.914 "write_zeroes": true, 00:05:06.914 "zcopy": true, 00:05:06.914 "get_zone_info": false, 00:05:06.914 "zone_management": false, 00:05:06.914 "zone_append": false, 00:05:06.914 "compare": false, 00:05:06.914 "compare_and_write": false, 00:05:06.914 "abort": true, 00:05:06.914 "seek_hole": false, 00:05:06.914 "seek_data": false, 00:05:06.914 "copy": true, 00:05:06.914 "nvme_iov_md": false 00:05:06.914 }, 00:05:06.914 "memory_domains": [ 00:05:06.914 { 00:05:06.914 "dma_device_id": "system", 00:05:06.914 "dma_device_type": 1 00:05:06.914 }, 00:05:06.914 { 00:05:06.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.914 "dma_device_type": 2 
00:05:06.914 } 00:05:06.914 ], 00:05:06.914 "driver_specific": { 00:05:06.914 "passthru": { 00:05:06.914 "name": "Passthru0", 00:05:06.914 "base_bdev_name": "Malloc0" 00:05:06.914 } 00:05:06.914 } 00:05:06.914 } 00:05:06.914 ]' 00:05:06.914 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:06.914 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.914 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.914 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.914 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.914 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.173 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.173 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.173 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.173 ************************************ 00:05:07.173 END TEST rpc_integrity 00:05:07.173 ************************************ 00:05:07.174 04:28:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.174 00:05:07.174 real 0m0.366s 00:05:07.174 user 0m0.189s 00:05:07.174 sys 0m0.072s 00:05:07.174 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.174 04:28:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 04:28:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.174 04:28:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.174 04:28:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.174 04:28:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 ************************************ 00:05:07.174 START TEST rpc_plugins 00:05:07.174 ************************************ 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:07.174 { 00:05:07.174 "name": "Malloc1", 00:05:07.174 "aliases": 
[ 00:05:07.174 "7a381e7e-715a-4073-be43-fa1ae34bdb95" 00:05:07.174 ], 00:05:07.174 "product_name": "Malloc disk", 00:05:07.174 "block_size": 4096, 00:05:07.174 "num_blocks": 256, 00:05:07.174 "uuid": "7a381e7e-715a-4073-be43-fa1ae34bdb95", 00:05:07.174 "assigned_rate_limits": { 00:05:07.174 "rw_ios_per_sec": 0, 00:05:07.174 "rw_mbytes_per_sec": 0, 00:05:07.174 "r_mbytes_per_sec": 0, 00:05:07.174 "w_mbytes_per_sec": 0 00:05:07.174 }, 00:05:07.174 "claimed": false, 00:05:07.174 "zoned": false, 00:05:07.174 "supported_io_types": { 00:05:07.174 "read": true, 00:05:07.174 "write": true, 00:05:07.174 "unmap": true, 00:05:07.174 "flush": true, 00:05:07.174 "reset": true, 00:05:07.174 "nvme_admin": false, 00:05:07.174 "nvme_io": false, 00:05:07.174 "nvme_io_md": false, 00:05:07.174 "write_zeroes": true, 00:05:07.174 "zcopy": true, 00:05:07.174 "get_zone_info": false, 00:05:07.174 "zone_management": false, 00:05:07.174 "zone_append": false, 00:05:07.174 "compare": false, 00:05:07.174 "compare_and_write": false, 00:05:07.174 "abort": true, 00:05:07.174 "seek_hole": false, 00:05:07.174 "seek_data": false, 00:05:07.174 "copy": true, 00:05:07.174 "nvme_iov_md": false 00:05:07.174 }, 00:05:07.174 "memory_domains": [ 00:05:07.174 { 00:05:07.174 "dma_device_id": "system", 00:05:07.174 "dma_device_type": 1 00:05:07.174 }, 00:05:07.174 { 00:05:07.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.174 "dma_device_type": 2 00:05:07.174 } 00:05:07.174 ], 00:05:07.174 "driver_specific": {} 00:05:07.174 } 00:05:07.174 ]' 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:07.174 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:07.434 ************************************ 00:05:07.434 END TEST rpc_plugins 00:05:07.434 ************************************ 00:05:07.434 04:28:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:07.434 00:05:07.434 real 0m0.168s 00:05:07.434 user 0m0.086s 00:05:07.434 sys 0m0.036s 00:05:07.434 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.434 04:28:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 04:28:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:07.434 04:28:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.434 04:28:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.434 04:28:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 ************************************ 00:05:07.434 START TEST rpc_trace_cmd_test 00:05:07.434 ************************************ 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:07.434 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58070", 00:05:07.434 "tpoint_group_mask": "0x8", 00:05:07.434 "iscsi_conn": { 00:05:07.434 "mask": "0x2", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "scsi": { 00:05:07.434 "mask": "0x4", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "bdev": { 00:05:07.434 "mask": "0x8", 00:05:07.434 "tpoint_mask": "0xffffffffffffffff" 00:05:07.434 }, 00:05:07.434 "nvmf_rdma": { 00:05:07.434 "mask": "0x10", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "nvmf_tcp": { 00:05:07.434 "mask": "0x20", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "ftl": { 00:05:07.434 "mask": "0x40", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "blobfs": { 00:05:07.434 "mask": "0x80", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "dsa": { 00:05:07.434 "mask": "0x200", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "thread": { 00:05:07.434 "mask": "0x400", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "nvme_pcie": { 00:05:07.434 "mask": "0x800", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "iaa": { 00:05:07.434 "mask": "0x1000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "nvme_tcp": { 00:05:07.434 "mask": "0x2000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "bdev_nvme": { 00:05:07.434 "mask": "0x4000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "sock": { 00:05:07.434 "mask": "0x8000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "blob": { 00:05:07.434 "mask": "0x10000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "bdev_raid": { 00:05:07.434 "mask": "0x20000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 }, 00:05:07.434 "scheduler": { 00:05:07.434 "mask": "0x40000", 00:05:07.434 "tpoint_mask": "0x0" 00:05:07.434 } 00:05:07.434 }' 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:07.434 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:07.693 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:07.693 04:28:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:07.693 ************************************ 00:05:07.693 END TEST rpc_trace_cmd_test 00:05:07.693 ************************************ 00:05:07.693 04:28:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:07.693 00:05:07.693 real 0m0.240s 
00:05:07.693 user 0m0.183s
00:05:07.693 sys 0m0.045s
00:05:07.693 04:28:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.693 04:28:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:07.693 04:28:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:07.693 04:28:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:07.693 04:28:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:07.693 04:28:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:07.693 04:28:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.693 04:28:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.693 ************************************
00:05:07.693 START TEST rpc_daemon_integrity
00:05:07.693 ************************************
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.693 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:07.693 {
00:05:07.693 "name": "Malloc2",
00:05:07.693 "aliases": [
00:05:07.693 "191824b8-40b8-4a1b-bce4-de66c83deb74"
00:05:07.693 ],
00:05:07.693 "product_name": "Malloc disk",
00:05:07.693 "block_size": 512,
00:05:07.693 "num_blocks": 16384,
00:05:07.693 "uuid": "191824b8-40b8-4a1b-bce4-de66c83deb74",
00:05:07.693 "assigned_rate_limits": {
00:05:07.693 "rw_ios_per_sec": 0,
00:05:07.693 "rw_mbytes_per_sec": 0,
00:05:07.693 "r_mbytes_per_sec": 0,
00:05:07.693 "w_mbytes_per_sec": 0
00:05:07.693 },
00:05:07.693 "claimed": false,
00:05:07.693 "zoned": false,
00:05:07.693 "supported_io_types": {
00:05:07.693 "read": true,
00:05:07.693 "write": true,
00:05:07.693 "unmap": true,
00:05:07.693 "flush": true,
00:05:07.693 "reset": true,
00:05:07.693 "nvme_admin": false,
00:05:07.693 "nvme_io": false,
00:05:07.693 "nvme_io_md": false,
00:05:07.693 "write_zeroes": true,
00:05:07.693 "zcopy": true,
00:05:07.693 "get_zone_info": false,
00:05:07.694 "zone_management": false,
00:05:07.694 "zone_append": false,
00:05:07.694 "compare": false,
00:05:07.694 "compare_and_write": false,
00:05:07.694 "abort": true,
00:05:07.694 "seek_hole": false,
00:05:07.694 "seek_data": false,
00:05:07.694 "copy": true,
00:05:07.694 "nvme_iov_md": false
00:05:07.694 },
00:05:07.694 "memory_domains": [
00:05:07.694 {
00:05:07.694 "dma_device_id": "system",
00:05:07.694 "dma_device_type": 1
00:05:07.694 },
00:05:07.694 {
00:05:07.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:07.694 "dma_device_type": 2
00:05:07.694 }
00:05:07.694 ],
00:05:07.694 "driver_specific": {}
00:05:07.694 }
00:05:07.694 ]'
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:07.954 [2024-10-15 04:28:57.237763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:07.954 [2024-10-15 04:28:57.237850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:07.954 [2024-10-15 04:28:57.237876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:05:07.954 [2024-10-15 04:28:57.237890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:07.954 [2024-10-15 04:28:57.240493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:07.954 [2024-10-15 04:28:57.240652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:07.954 Passthru0
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:07.954 {
00:05:07.954 "name": "Malloc2",
00:05:07.954 "aliases": [
00:05:07.954 "191824b8-40b8-4a1b-bce4-de66c83deb74"
00:05:07.954 ],
00:05:07.954 "product_name": "Malloc disk",
00:05:07.954 "block_size": 512,
00:05:07.954 "num_blocks": 16384,
00:05:07.954 "uuid": "191824b8-40b8-4a1b-bce4-de66c83deb74",
00:05:07.954 "assigned_rate_limits": {
00:05:07.954 "rw_ios_per_sec": 0,
00:05:07.954 "rw_mbytes_per_sec": 0,
00:05:07.954 "r_mbytes_per_sec": 0,
00:05:07.954 "w_mbytes_per_sec": 0
00:05:07.954 },
00:05:07.954 "claimed": true,
00:05:07.954 "claim_type": "exclusive_write",
00:05:07.954 "zoned": false,
00:05:07.954 "supported_io_types": {
00:05:07.954 "read": true,
00:05:07.954 "write": true,
00:05:07.954 "unmap": true,
00:05:07.954 "flush": true,
00:05:07.954 "reset": true,
00:05:07.954 "nvme_admin": false,
00:05:07.954 "nvme_io": false,
00:05:07.954 "nvme_io_md": false,
00:05:07.954 "write_zeroes": true,
00:05:07.954 "zcopy": true,
00:05:07.954 "get_zone_info": false,
00:05:07.954 "zone_management": false,
00:05:07.954 "zone_append": false,
00:05:07.954 "compare": false,
00:05:07.954 "compare_and_write": false,
00:05:07.954 "abort": true,
00:05:07.954 "seek_hole": false,
00:05:07.954 "seek_data": false,
00:05:07.954 "copy": true, 00:05:07.954 "nvme_iov_md": false 00:05:07.954 }, 00:05:07.954 "memory_domains": [ 00:05:07.954 { 00:05:07.954 "dma_device_id": "system", 00:05:07.954 "dma_device_type": 1 00:05:07.954 }, 00:05:07.954 { 00:05:07.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.954 "dma_device_type": 2 00:05:07.954 } 00:05:07.954 ], 00:05:07.954 "driver_specific": {} 00:05:07.954 }, 00:05:07.954 { 00:05:07.954 "name": "Passthru0", 00:05:07.954 "aliases": [ 00:05:07.954 "e82896f7-c041-54c9-967a-74488b0c83b1" 00:05:07.954 ], 00:05:07.954 "product_name": "passthru", 00:05:07.954 "block_size": 512, 00:05:07.954 "num_blocks": 16384, 00:05:07.954 "uuid": "e82896f7-c041-54c9-967a-74488b0c83b1", 00:05:07.954 "assigned_rate_limits": { 00:05:07.954 "rw_ios_per_sec": 0, 00:05:07.954 "rw_mbytes_per_sec": 0, 00:05:07.954 "r_mbytes_per_sec": 0, 00:05:07.954 "w_mbytes_per_sec": 0 00:05:07.954 }, 00:05:07.954 "claimed": false, 00:05:07.954 "zoned": false, 00:05:07.954 "supported_io_types": { 00:05:07.954 "read": true, 00:05:07.954 "write": true, 00:05:07.954 "unmap": true, 00:05:07.954 "flush": true, 00:05:07.954 "reset": true, 00:05:07.954 "nvme_admin": false, 00:05:07.954 "nvme_io": false, 00:05:07.954 "nvme_io_md": false, 00:05:07.954 "write_zeroes": true, 00:05:07.954 "zcopy": true, 00:05:07.954 "get_zone_info": false, 00:05:07.954 "zone_management": false, 00:05:07.954 "zone_append": false, 00:05:07.954 "compare": false, 00:05:07.954 "compare_and_write": false, 00:05:07.954 "abort": true, 00:05:07.954 "seek_hole": false, 00:05:07.954 "seek_data": false, 00:05:07.954 "copy": true, 00:05:07.954 "nvme_iov_md": false 00:05:07.954 }, 00:05:07.954 "memory_domains": [ 00:05:07.954 { 00:05:07.954 "dma_device_id": "system", 00:05:07.954 "dma_device_type": 1 00:05:07.954 }, 00:05:07.954 { 00:05:07.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.954 "dma_device_type": 2 00:05:07.954 } 00:05:07.954 ], 00:05:07.954 "driver_specific": { 00:05:07.954 "passthru": { 00:05:07.954 "name": "Passthru0", 00:05:07.954 "base_bdev_name": "Malloc2" 00:05:07.954 } 00:05:07.954 } 00:05:07.954 } 00:05:07.954 ]' 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:07.954 ************************************
00:05:07.954 END TEST rpc_daemon_integrity
00:05:07.954 ************************************
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:07.954
00:05:07.954 real 0m0.333s
00:05:07.954 user 0m0.177s
00:05:07.954 sys 0m0.056s
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.954 04:28:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:08.214 04:28:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:08.214 04:28:57 rpc -- rpc/rpc.sh@84 -- # killprocess 58070
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@950 -- # '[' -z 58070 ']'
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@954 -- # kill -0 58070
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@955 -- # uname
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58070
00:05:08.214 killing process with pid 58070
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58070'
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@969 -- # kill 58070
00:05:08.214 04:28:57 rpc -- common/autotest_common.sh@974 -- # wait 58070
00:05:10.758
00:05:10.758 real 0m5.549s
00:05:10.758 user 0m6.081s
00:05:10.758 sys 0m1.019s
00:05:10.758 ************************************
00:05:10.758 END TEST rpc
00:05:10.758 ************************************
00:05:10.758 04:29:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:10.758 04:29:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.759 04:29:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:05:10.759 04:29:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.759 04:29:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.759 04:29:00 -- common/autotest_common.sh@10 -- # set +x
00:05:10.759 ************************************
00:05:10.759 START TEST skip_rpc
00:05:10.759 ************************************
00:05:10.759 04:29:00 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:05:10.759 * Looking for test storage...
00:05:10.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:10.759 04:29:00 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:10.759 04:29:00 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:05:10.759 04:29:00 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@345 -- # : 1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:11.017 04:29:00 skip_rpc -- scripts/common.sh@368 -- # return 0
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.017 --rc genhtml_branch_coverage=1
00:05:11.017 --rc genhtml_function_coverage=1
00:05:11.017 --rc genhtml_legend=1
00:05:11.017 --rc geninfo_all_blocks=1
00:05:11.017 --rc geninfo_unexecuted_blocks=1
00:05:11.017
00:05:11.017 '
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.017 --rc genhtml_branch_coverage=1
00:05:11.017 --rc genhtml_function_coverage=1
00:05:11.017 --rc genhtml_legend=1
00:05:11.017 --rc geninfo_all_blocks=1
00:05:11.017 --rc geninfo_unexecuted_blocks=1
00:05:11.017
00:05:11.017 '
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.017 --rc genhtml_branch_coverage=1
00:05:11.017 --rc genhtml_function_coverage=1
00:05:11.017 --rc genhtml_legend=1
00:05:11.017 --rc geninfo_all_blocks=1
00:05:11.017 --rc geninfo_unexecuted_blocks=1
00:05:11.017
00:05:11.017 '
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:11.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:11.017 --rc genhtml_branch_coverage=1
00:05:11.017 --rc genhtml_function_coverage=1
00:05:11.017 --rc genhtml_legend=1
00:05:11.017 --rc geninfo_all_blocks=1
00:05:11.017 --rc geninfo_unexecuted_blocks=1
00:05:11.017
00:05:11.017 '
00:05:11.017 04:29:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:11.017 04:29:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:05:11.017 04:29:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:11.017 04:29:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.017 ************************************
00:05:11.017 START TEST skip_rpc
00:05:11.017 ************************************
00:05:11.017 04:29:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:05:11.017 04:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58311
00:05:11.017 04:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:11.017 04:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:11.017 04:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:11.017 [2024-10-15 04:29:00.416323] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
00:05:11.017 [2024-10-15 04:29:00.416688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58311 ]
00:05:11.276 [2024-10-15 04:29:00.591953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.276 [2024-10-15 04:29:00.714120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58311
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58311 ']'
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58311
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58311
00:05:16.549 killing process with pid 58311
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58311'
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58311
00:05:16.549 04:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58311
00:05:18.509 ************************************
00:05:18.509 END TEST skip_rpc
00:05:18.509 ************************************
00:05:18.509
00:05:18.509 real 0m7.486s
00:05:18.509 user 0m6.946s
00:05:18.509 sys 0m0.436s
00:05:18.509 04:29:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:18.509 04:29:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.509 04:29:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:18.509 04:29:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:18.509 04:29:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.509 04:29:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.509 ************************************
00:05:18.509 START TEST skip_rpc_with_json
00:05:18.509 ************************************
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58415
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58415
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58415 ']'
00:05:18.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:18.510 04:29:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:18.510 [2024-10-15 04:29:07.975592] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
00:05:18.510 [2024-10-15 04:29:07.975730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58415 ]
00:05:18.767 [2024-10-15 04:29:08.148996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.767 [2024-10-15 04:29:08.265909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:19.701 [2024-10-15 04:29:09.177936] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:19.701 request:
00:05:19.701 {
00:05:19.701 "trtype": "tcp",
00:05:19.701 "method": "nvmf_get_transports",
00:05:19.701 "req_id": 1
00:05:19.701 }
00:05:19.701 Got JSON-RPC error response
00:05:19.701 response:
00:05:19.701 {
00:05:19.701 "code": -19,
00:05:19.701 "message": "No such device"
00:05:19.701 }
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:19.701 [2024-10-15 04:29:09.194010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:19.701 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:19.960 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:19.960 04:29:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:19.960 {
00:05:19.960 "subsystems": [
00:05:19.960 {
00:05:19.960 "subsystem": "fsdev",
00:05:19.960 "config": [
00:05:19.960 {
00:05:19.960 "method": "fsdev_set_opts",
00:05:19.960 "params": {
00:05:19.960 "fsdev_io_pool_size": 65535,
00:05:19.960 "fsdev_io_cache_size": 256
00:05:19.960 }
00:05:19.960 }
00:05:19.960 ]
00:05:19.960 },
00:05:19.960 {
00:05:19.960 "subsystem": "keyring",
00:05:19.960 "config": []
00:05:19.960 },
00:05:19.960 {
00:05:19.960 "subsystem": "iobuf",
00:05:19.960 "config": [
00:05:19.960 {
00:05:19.960 "method": "iobuf_set_options",
00:05:19.960 "params": {
00:05:19.960 "small_pool_count": 8192,
00:05:19.960 "large_pool_count": 1024,
00:05:19.960 "small_bufsize": 8192,
00:05:19.960 "large_bufsize": 135168
00:05:19.960 }
00:05:19.960 }
00:05:19.960 ]
00:05:19.960 },
00:05:19.960 {
00:05:19.960 "subsystem": "sock",
00:05:19.960 "config": [
00:05:19.960 {
00:05:19.960 "method": "sock_set_default_impl",
00:05:19.960 "params": {
00:05:19.960 "impl_name": "posix"
00:05:19.960 }
00:05:19.960 },
00:05:19.960 {
00:05:19.960 "method": "sock_impl_set_options",
00:05:19.960 "params": {
00:05:19.960 "impl_name": "ssl",
00:05:19.960 "recv_buf_size": 4096,
00:05:19.961 "send_buf_size": 4096,
00:05:19.961 "enable_recv_pipe": true,
00:05:19.961 "enable_quickack": false,
00:05:19.961 "enable_placement_id": 0,
00:05:19.961 "enable_zerocopy_send_server": true,
00:05:19.961 "enable_zerocopy_send_client": false,
00:05:19.961 "zerocopy_threshold": 0,
00:05:19.961 "tls_version": 0,
00:05:19.961 "enable_ktls": false
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "sock_impl_set_options",
00:05:19.961 "params": {
00:05:19.961 "impl_name": "posix",
00:05:19.961 "recv_buf_size": 2097152,
00:05:19.961 "send_buf_size": 2097152,
00:05:19.961 "enable_recv_pipe": true,
00:05:19.961 "enable_quickack": false,
00:05:19.961 "enable_placement_id": 0,
00:05:19.961 "enable_zerocopy_send_server": true,
00:05:19.961 "enable_zerocopy_send_client": false,
00:05:19.961 "zerocopy_threshold": 0,
00:05:19.961 "tls_version": 0,
00:05:19.961 "enable_ktls": false
00:05:19.961 }
00:05:19.961 }
00:05:19.961 ]
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "vmd",
00:05:19.961 "config": []
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "accel",
00:05:19.961 "config": [
00:05:19.961 {
00:05:19.961 "method": "accel_set_options",
00:05:19.961 "params": {
00:05:19.961 "small_cache_size": 128,
00:05:19.961 "large_cache_size": 16,
00:05:19.961 "task_count": 2048,
00:05:19.961 "sequence_count": 2048,
00:05:19.961 "buf_count": 2048
00:05:19.961 }
00:05:19.961 }
00:05:19.961 ]
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "bdev",
00:05:19.961 "config": [
00:05:19.961 {
00:05:19.961 "method": "bdev_set_options",
00:05:19.961 "params": {
00:05:19.961 "bdev_io_pool_size": 65535,
00:05:19.961 "bdev_io_cache_size": 256,
00:05:19.961 "bdev_auto_examine": true,
00:05:19.961 "iobuf_small_cache_size": 128,
00:05:19.961 "iobuf_large_cache_size": 16
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "bdev_raid_set_options",
00:05:19.961 "params": {
00:05:19.961 "process_window_size_kb": 1024,
00:05:19.961 "process_max_bandwidth_mb_sec": 0
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "bdev_iscsi_set_options",
00:05:19.961 "params": {
00:05:19.961 "timeout_sec": 30
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "bdev_nvme_set_options",
00:05:19.961 "params": {
00:05:19.961 "action_on_timeout": "none",
00:05:19.961 "timeout_us": 0,
00:05:19.961 "timeout_admin_us": 0,
00:05:19.961 "keep_alive_timeout_ms": 10000,
00:05:19.961 "arbitration_burst": 0,
00:05:19.961 "low_priority_weight": 0,
00:05:19.961 "medium_priority_weight": 0,
00:05:19.961 "high_priority_weight": 0,
00:05:19.961 "nvme_adminq_poll_period_us": 10000,
00:05:19.961 "nvme_ioq_poll_period_us": 0,
00:05:19.961 "io_queue_requests": 0,
00:05:19.961 "delay_cmd_submit": true,
00:05:19.961 "transport_retry_count": 4,
00:05:19.961 "bdev_retry_count": 3,
00:05:19.961 "transport_ack_timeout": 0,
00:05:19.961 "ctrlr_loss_timeout_sec": 0,
00:05:19.961 "reconnect_delay_sec": 0,
00:05:19.961 "fast_io_fail_timeout_sec": 0,
00:05:19.961 "disable_auto_failback": false,
00:05:19.961 "generate_uuids": false,
00:05:19.961 "transport_tos": 0,
00:05:19.961 "nvme_error_stat": false,
00:05:19.961 "rdma_srq_size": 0,
00:05:19.961 "io_path_stat": false,
00:05:19.961 "allow_accel_sequence": false,
00:05:19.961 "rdma_max_cq_size": 0,
00:05:19.961 "rdma_cm_event_timeout_ms": 0,
00:05:19.961 "dhchap_digests": [
00:05:19.961 "sha256",
00:05:19.961 "sha384",
00:05:19.961 "sha512"
00:05:19.961 ],
00:05:19.961 "dhchap_dhgroups": [
00:05:19.961 "null",
00:05:19.961 "ffdhe2048",
00:05:19.961 "ffdhe3072",
00:05:19.961 "ffdhe4096",
00:05:19.961 "ffdhe6144",
00:05:19.961 "ffdhe8192"
00:05:19.961 ]
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "bdev_nvme_set_hotplug",
00:05:19.961 "params": {
00:05:19.961 "period_us": 100000,
00:05:19.961 "enable": false
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "bdev_wait_for_examine"
00:05:19.961 }
00:05:19.961 ]
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "scsi",
00:05:19.961 "config": null
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "scheduler",
00:05:19.961 "config": [
00:05:19.961 {
00:05:19.961 "method": "framework_set_scheduler",
00:05:19.961 "params": {
00:05:19.961 "name": "static"
00:05:19.961 }
00:05:19.961 }
00:05:19.961 ]
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "vhost_scsi",
00:05:19.961 "config": []
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "vhost_blk",
00:05:19.961 "config": []
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "ublk",
00:05:19.961 "config": []
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "nbd",
00:05:19.961 "config": []
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "nvmf",
00:05:19.961 "config": [
00:05:19.961 {
00:05:19.961 "method": "nvmf_set_config",
00:05:19.961 "params": {
00:05:19.961 "discovery_filter": "match_any",
00:05:19.961 "admin_cmd_passthru": {
00:05:19.961 "identify_ctrlr": false
00:05:19.961 },
00:05:19.961 "dhchap_digests": [
00:05:19.961 "sha256",
00:05:19.961 "sha384",
00:05:19.961 "sha512"
00:05:19.961 ],
00:05:19.961 "dhchap_dhgroups": [
00:05:19.961 "null",
00:05:19.961 "ffdhe2048",
00:05:19.961 "ffdhe3072",
00:05:19.961 "ffdhe4096",
00:05:19.961 "ffdhe6144",
00:05:19.961 "ffdhe8192"
00:05:19.961 ]
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "nvmf_set_max_subsystems",
00:05:19.961 "params": {
00:05:19.961 "max_subsystems": 1024
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "nvmf_set_crdt",
00:05:19.961 "params": {
00:05:19.961 "crdt1": 0,
00:05:19.961 "crdt2": 0,
00:05:19.961 "crdt3": 0
00:05:19.961 }
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "method": "nvmf_create_transport",
00:05:19.961 "params": {
00:05:19.961 "trtype": "TCP",
00:05:19.961 "max_queue_depth": 128,
00:05:19.961 "max_io_qpairs_per_ctrlr": 127,
00:05:19.961 "in_capsule_data_size": 4096,
00:05:19.961 "max_io_size": 131072,
00:05:19.961 "io_unit_size": 131072,
00:05:19.961 "max_aq_depth": 128,
00:05:19.961 "num_shared_buffers": 511,
00:05:19.961 "buf_cache_size": 4294967295,
00:05:19.961 "dif_insert_or_strip": false,
00:05:19.961 "zcopy": false,
00:05:19.961 "c2h_success": true,
00:05:19.961 "sock_priority": 0,
00:05:19.961 "abort_timeout_sec": 1,
00:05:19.961 "ack_timeout": 0,
00:05:19.961 "data_wr_pool_size": 0
00:05:19.961 }
00:05:19.961 }
00:05:19.961 ]
00:05:19.961 },
00:05:19.961 {
00:05:19.961 "subsystem": "iscsi",
00:05:19.961 "config": [
00:05:19.961 {
00:05:19.961 "method": "iscsi_set_options",
00:05:19.961 "params": {
00:05:19.961 "node_base": "iqn.2016-06.io.spdk",
00:05:19.961 "max_sessions": 128,
00:05:19.961 "max_connections_per_session": 2,
00:05:19.961 "max_queue_depth": 64,
00:05:19.961 "default_time2wait": 2,
00:05:19.961 "default_time2retain": 20, 00:05:19.961 "first_burst_length": 8192, 00:05:19.961 "immediate_data": true, 00:05:19.961 "allow_duplicated_isid": false, 00:05:19.961 "error_recovery_level": 0, 00:05:19.961 "nop_timeout": 60, 00:05:19.961 "nop_in_interval": 30, 00:05:19.961 "disable_chap": false, 00:05:19.961 "require_chap": false, 00:05:19.961 "mutual_chap": false, 00:05:19.961 "chap_group": 0, 00:05:19.961 "max_large_datain_per_connection": 64, 00:05:19.961 "max_r2t_per_connection": 4, 00:05:19.961 "pdu_pool_size": 36864, 00:05:19.961 "immediate_data_pool_size": 16384, 00:05:19.961 "data_out_pool_size": 2048 00:05:19.961 } 00:05:19.961 } 00:05:19.961 ] 00:05:19.961 } 00:05:19.961 ] 00:05:19.961 } 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58415 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58415 ']' 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58415 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58415 00:05:19.961 killing process with pid 58415 00:05:19.961 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.962 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.962 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58415' 00:05:19.962 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58415 00:05:19.962 04:29:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58415 00:05:22.524 04:29:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58471 00:05:22.524 04:29:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.524 04:29:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58471 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58471 ']' 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58471 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58471 00:05:27.784 killing process with pid 58471 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58471' 00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58471 
00:05:27.784 04:29:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58471
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:05:30.370 ************************************
00:05:30.370 END TEST skip_rpc_with_json
00:05:30.370 ************************************
00:05:30.370
00:05:30.370 real 0m11.504s
00:05:30.370 user 0m10.913s
00:05:30.370 sys 0m0.910s
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.370 04:29:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:30.370 04:29:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:30.370 04:29:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.370 04:29:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.370 ************************************
00:05:30.370 START TEST skip_rpc_with_delay
00:05:30.370 ************************************
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:30.370 [2024-10-15 04:29:19.550371] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:30.370 ************************************
00:05:30.370 END TEST skip_rpc_with_delay
00:05:30.370 ************************************
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:30.370
00:05:30.370 real 0m0.176s
00:05:30.370 user 0m0.081s
00:05:30.370 sys 0m0.094s
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:30.370 04:29:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:30.370 04:29:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:30.370 04:29:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:30.370 04:29:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:30.370 04:29:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:30.370 04:29:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.370 04:29:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.370 ************************************
00:05:30.370 START TEST exit_on_failed_rpc_init
00:05:30.370 ************************************
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58599
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58599
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58599 ']'
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:30.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:30.370 04:29:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:30.370 [2024-10-15 04:29:19.804420] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
00:05:30.370 [2024-10-15 04:29:19.804727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58599 ]
00:05:30.628 [2024-10-15 04:29:19.975696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.628 [2024-10-15 04:29:20.094223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:05:31.560 04:29:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:31.819 [2024-10-15 04:29:21.069312] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
00:05:31.819 [2024-10-15 04:29:21.069451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58623 ]
00:05:31.819 [2024-10-15 04:29:21.242565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.078 [2024-10-15 04:29:21.358484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:32.078 [2024-10-15 04:29:21.358593] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:32.078 [2024-10-15 04:29:21.358611] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:32.078 [2024-10-15 04:29:21.358632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58599
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58599 ']'
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58599
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58599
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58599'
00:05:32.337 killing process with pid 58599
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58599
00:05:32.337 04:29:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58599
00:05:34.895
00:05:34.895 real 0m4.382s
00:05:34.895 user 0m4.694s
00:05:34.895 sys 0m0.600s
00:05:34.895 04:29:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:34.895 ************************************
00:05:34.895 END TEST exit_on_failed_rpc_init
00:05:34.895 ************************************
00:05:34.895 04:29:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:34.895 04:29:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:34.895 ************************************
00:05:34.895 END TEST skip_rpc
00:05:34.895 ************************************
00:05:34.895
00:05:34.895 real 0m24.070s
00:05:34.895 user 0m22.845s
00:05:34.895 sys 0m2.360s
00:05:34.895 04:29:24 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:34.895 04:29:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:34.895 04:29:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:34.895 04:29:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:34.895 04:29:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:34.895 04:29:24 -- common/autotest_common.sh@10 -- # set +x
00:05:34.895 ************************************
00:05:34.895 START TEST rpc_client
00:05:34.895 ************************************
00:05:34.895 04:29:24 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:34.895 * Looking for test storage...
00:05:34.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:05:34.895 04:29:24 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:34.895 04:29:24 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:05:34.895 04:29:24 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:35.155 04:29:24 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:35.155 04:29:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:35.155 04:29:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:35.155 04:29:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:35.155 04:29:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:35.155 04:29:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:35.156 04:29:24 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:35.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.156 --rc genhtml_branch_coverage=1
00:05:35.156 --rc genhtml_function_coverage=1
00:05:35.156 --rc genhtml_legend=1
00:05:35.156 --rc geninfo_all_blocks=1
00:05:35.156 --rc geninfo_unexecuted_blocks=1
00:05:35.156
00:05:35.156 '
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:35.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.156 --rc genhtml_branch_coverage=1
00:05:35.156 --rc genhtml_function_coverage=1
00:05:35.156 --rc genhtml_legend=1
00:05:35.156 --rc geninfo_all_blocks=1
00:05:35.156 --rc geninfo_unexecuted_blocks=1
00:05:35.156
00:05:35.156 '
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:35.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.156 --rc genhtml_branch_coverage=1
00:05:35.156 --rc genhtml_function_coverage=1
00:05:35.156 --rc genhtml_legend=1
00:05:35.156 --rc geninfo_all_blocks=1
00:05:35.156 --rc geninfo_unexecuted_blocks=1
00:05:35.156
00:05:35.156 '
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:35.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.156 --rc genhtml_branch_coverage=1
00:05:35.156 --rc genhtml_function_coverage=1
00:05:35.156 --rc genhtml_legend=1
00:05:35.156 --rc geninfo_all_blocks=1
00:05:35.156 --rc geninfo_unexecuted_blocks=1
00:05:35.156
00:05:35.156 '
00:05:35.156 04:29:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:05:35.156 OK
00:05:35.156 04:29:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:35.156
00:05:35.156 real 0m0.313s
00:05:35.156 user 0m0.164s
00:05:35.156 sys 0m0.160s
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:35.156 04:29:24 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:35.156 ************************************
00:05:35.156 END TEST rpc_client
00:05:35.156 ************************************
00:05:35.156 04:29:24 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:35.156 04:29:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:35.156 04:29:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:35.156 04:29:24 -- common/autotest_common.sh@10 -- # set +x
00:05:35.156 ************************************
00:05:35.156 START TEST json_config
00:05:35.156 ************************************
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:35.417 04:29:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:35.417 04:29:24 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:35.417 04:29:24 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:35.417 04:29:24 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:35.417 04:29:24 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:35.417 04:29:24 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:35.417 04:29:24 json_config -- scripts/common.sh@345 -- # : 1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:35.417 04:29:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:35.417 04:29:24 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@353 -- # local d=1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:35.417 04:29:24 json_config -- scripts/common.sh@355 -- # echo 1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:35.417 04:29:24 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@353 -- # local d=2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:35.417 04:29:24 json_config -- scripts/common.sh@355 -- # echo 2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:35.417 04:29:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:35.417 04:29:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:35.417 04:29:24 json_config -- scripts/common.sh@368 -- # return 0
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.417 --rc genhtml_branch_coverage=1
00:05:35.417 --rc genhtml_function_coverage=1
00:05:35.417 --rc genhtml_legend=1
00:05:35.417 --rc geninfo_all_blocks=1
00:05:35.417 --rc geninfo_unexecuted_blocks=1
00:05:35.417
00:05:35.417 '
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.417 --rc genhtml_branch_coverage=1
00:05:35.417 --rc genhtml_function_coverage=1
00:05:35.417 --rc genhtml_legend=1
00:05:35.417 --rc geninfo_all_blocks=1
00:05:35.417 --rc geninfo_unexecuted_blocks=1
00:05:35.417
00:05:35.417 '
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.417 --rc genhtml_branch_coverage=1
00:05:35.417 --rc genhtml_function_coverage=1
00:05:35.417 --rc genhtml_legend=1
00:05:35.417 --rc geninfo_all_blocks=1
00:05:35.417 --rc geninfo_unexecuted_blocks=1
00:05:35.417
00:05:35.417 '
00:05:35.417 04:29:24 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:35.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.417 --rc genhtml_branch_coverage=1
00:05:35.417 --rc genhtml_function_coverage=1
00:05:35.417 --rc genhtml_legend=1
00:05:35.417 --rc geninfo_all_blocks=1
00:05:35.417 --rc geninfo_unexecuted_blocks=1
00:05:35.417
00:05:35.417 '
00:05:35.417 04:29:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:35.417 04:29:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba16f2d3-e337-44f2-8c24-0537a184f995
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ba16f2d3-e337-44f2-8c24-0537a184f995
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:35.418 04:29:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:35.418 04:29:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:35.418 04:29:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:35.418 04:29:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:35.418 04:29:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:35.418 04:29:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:35.418 04:29:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:35.418 04:29:24 json_config -- paths/export.sh@5 -- # export PATH
00:05:35.418 04:29:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@51 -- # : 0
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:35.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:35.418 04:29:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:05:35.418 WARNING: No tests are enabled so not running JSON configuration tests
00:05:35.418 04:29:24 json_config -- json_config/json_config.sh@28 -- # exit 0
00:05:35.418
00:05:35.418 real 0m0.233s
00:05:35.418 user 0m0.136s
00:05:35.418 sys 0m0.098s
00:05:35.418 04:29:24 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:35.418 04:29:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:35.418 ************************************
00:05:35.418 END TEST json_config
00:05:35.418 ************************************
00:05:35.418 04:29:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:05:35.418 04:29:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:35.418 04:29:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:35.418 04:29:24 -- common/autotest_common.sh@10 -- # set +x
00:05:35.418 ************************************
00:05:35.418 START TEST json_config_extra_key
00:05:35.418 ************************************
00:05:35.679 04:29:24 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:05:35.679 04:29:24 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:35.679 04:29:24 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version
00:05:35.679 04:29:24 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:35.679 04:29:25 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:35.679 04:29:25
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:35.679 04:29:25 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.679 04:29:25 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 04:29:25 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 04:29:25 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 04:29:25 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc 
genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba16f2d3-e337-44f2-8c24-0537a184f995 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ba16f2d3-e337-44f2-8c24-0537a184f995 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.679 04:29:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.679 04:29:25 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.679 04:29:25 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.679 04:29:25 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.679 04:29:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.679 04:29:25 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.679 04:29:25 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.679 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.680 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.680 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.680 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.680 INFO: launching applications... 
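The "[: : integer expression expected" complaint logged twice above (nvmf/common.sh line 33) is bash's [ builtin rejecting a numeric -eq test whose left-hand side expanded to an empty string; the test simply fails and the run continues. A minimal reproduction with the usual guard; the flag name is hypothetical, since the trace only shows the empty expansion:

    flag=""                        # unset/empty feature flag, as in the trace
    [ "$flag" -eq 1 ]              # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo "flag not set"   # defaulting avoids the error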
00:05:35.680 04:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58833 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.680 Waiting for target to run... 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58833 /var/tmp/spdk_tgt.sock 00:05:35.680 04:29:25 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.680 04:29:25 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58833 ']' 00:05:35.680 04:29:25 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.680 04:29:25 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.680 04:29:25 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.680 04:29:25 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.680 04:29:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.938 [2024-10-15 04:29:25.239314] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:05:35.938 [2024-10-15 04:29:25.239591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58833 ] 00:05:36.197 [2024-10-15 04:29:25.628437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.456 [2024-10-15 04:29:25.733577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.022 00:05:37.022 INFO: shutting down applications... 00:05:37.022 04:29:26 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.022 04:29:26 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.022 04:29:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:37.022 04:29:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58833 ]] 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58833 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:37.022 04:29:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.588 04:29:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.588 04:29:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.588 04:29:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:37.588 04:29:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.156 04:29:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.156 04:29:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.156 04:29:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:38.156 04:29:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.723 04:29:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.723 04:29:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.723 04:29:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:38.723 04:29:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.291 04:29:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.291 04:29:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.291 04:29:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:39.291 04:29:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.550 04:29:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.550 04:29:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.550 04:29:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:39.550 04:29:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58833 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.119 SPDK target shutdown done 00:05:40.119 Success 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.119 04:29:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.119 04:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.119 00:05:40.119 real 0m4.648s 00:05:40.119 user 0m4.101s 00:05:40.119 sys 0m0.621s 00:05:40.119 
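The json_config/common.sh flow traced above is a launch/poll/SIGINT/poll lifecycle: spdk_tgt is started against a UNIX-domain RPC socket, waitforlisten polls (max_retries=100 in the trace) until the socket answers, and the shutdown path sends SIGINT and then probes kill -0 up to 30 times at 0.5 s intervals. A condensed sketch reconstructed from the xtrace, not SPDK's exact helpers; the rpc_get_methods probe stands in for whatever check waitforlisten actually performs:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    pid=$!
    for ((i = 0; i < 100; i++)); do    # waitforlisten: poll the RPC socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock \
            rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    kill -SIGINT "$pid"                # shutdown: SIGINT, then poll liveness
    for ((i = 0; i < 30; i++)); do     # 30 tries x 0.5 s, as in the trace
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'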
************************************ 00:05:40.119 END TEST json_config_extra_key 00:05:40.119 ************************************ 00:05:40.119 04:29:29 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.119 04:29:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.119 04:29:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.119 04:29:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.119 04:29:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.119 04:29:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.119 ************************************ 00:05:40.119 START TEST alias_rpc 00:05:40.119 ************************************ 00:05:40.119 04:29:29 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.379 * Looking for test storage... 00:05:40.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:40.379 04:29:29 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.379 04:29:29 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.379 04:29:29 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.379 04:29:29 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.379 04:29:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.380 04:29:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.380 --rc genhtml_branch_coverage=1 00:05:40.380 --rc genhtml_function_coverage=1 00:05:40.380 --rc genhtml_legend=1 00:05:40.380 --rc geninfo_all_blocks=1 00:05:40.380 --rc geninfo_unexecuted_blocks=1 00:05:40.380 00:05:40.380 ' 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.380 --rc genhtml_branch_coverage=1 00:05:40.380 --rc genhtml_function_coverage=1 00:05:40.380 --rc genhtml_legend=1 00:05:40.380 --rc geninfo_all_blocks=1 00:05:40.380 --rc geninfo_unexecuted_blocks=1 00:05:40.380 00:05:40.380 ' 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.380 --rc genhtml_branch_coverage=1 00:05:40.380 --rc genhtml_function_coverage=1 00:05:40.380 --rc genhtml_legend=1 00:05:40.380 --rc geninfo_all_blocks=1 00:05:40.380 --rc geninfo_unexecuted_blocks=1 00:05:40.380 00:05:40.380 ' 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.380 --rc genhtml_branch_coverage=1 00:05:40.380 --rc genhtml_function_coverage=1 00:05:40.380 --rc genhtml_legend=1 00:05:40.380 --rc geninfo_all_blocks=1 00:05:40.380 --rc geninfo_unexecuted_blocks=1 00:05:40.380 00:05:40.380 ' 00:05:40.380 04:29:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.380 04:29:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58944 00:05:40.380 04:29:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.380 04:29:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58944 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58944 ']' 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:40.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.380 04:29:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.639 [2024-10-15 04:29:29.951171] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:05:40.639 [2024-10-15 04:29:29.951488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:05:40.639 [2024-10-15 04:29:30.123764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.898 [2024-10-15 04:29:30.236854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:41.837 04:29:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:41.837 04:29:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58944 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58944 ']' 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58944 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.837 04:29:31 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58944 00:05:42.107 killing process with pid 58944 00:05:42.107 04:29:31 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.107 04:29:31 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.107 04:29:31 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58944' 00:05:42.107 04:29:31 alias_rpc -- common/autotest_common.sh@969 -- # kill 58944 00:05:42.107 04:29:31 alias_rpc -- common/autotest_common.sh@974 -- # wait 58944 00:05:44.646 ************************************ 00:05:44.646 END TEST alias_rpc 00:05:44.646 ************************************ 00:05:44.646 00:05:44.646 real 0m4.177s 00:05:44.646 user 0m4.134s 00:05:44.646 sys 0m0.623s 00:05:44.646 04:29:33 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.646 04:29:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.646 04:29:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:44.646 04:29:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.646 04:29:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.646 04:29:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.646 04:29:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.646 ************************************ 00:05:44.646 START TEST spdkcli_tcp 00:05:44.646 ************************************ 00:05:44.646 04:29:33 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.646 * Looking for test storage... 
00:05:44.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.646 04:29:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 
00:05:44.646 ' 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 04:29:34 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.646 --rc genhtml_branch_coverage=1 00:05:44.646 --rc genhtml_function_coverage=1 00:05:44.646 --rc genhtml_legend=1 00:05:44.646 --rc geninfo_all_blocks=1 00:05:44.646 --rc geninfo_unexecuted_blocks=1 00:05:44.646 00:05:44.646 ' 00:05:44.646 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:44.646 04:29:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:44.646 04:29:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:44.646 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.646 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.647 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.647 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.647 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.647 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59051 00:05:44.647 04:29:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59051 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59051 ']' 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.647 04:29:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.907 [2024-10-15 04:29:34.238960] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
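Every TEST section above opens with the same version probe from scripts/common.sh: lcov --version | awk '{print $NF}' feeds lt 1.15 2, and the xtrace expands lt into cmp_versions, which splits both versions on . - : and compares them component-wise, missing components counting as zero. A reconstruction of that comparator from the trace, simplified to the strict '<' path the log exercises (the real helper also dispatches on other operators); it returns success when the first version is strictly lower:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"   # e.g. 1.15 -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # e.g. 2    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo "pre-2.x lcov"   # true here, so the --rc lcov_*=1 options get exported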
00:05:44.907 [2024-10-15 04:29:34.239495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:05:45.167 [2024-10-15 04:29:34.413666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.167 [2024-10-15 04:29:34.531095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.167 [2024-10-15 04:29:34.531129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.106 04:29:35 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.106 04:29:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:46.106 04:29:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59074 00:05:46.106 04:29:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.106 04:29:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.106 [ 00:05:46.106 "bdev_malloc_delete", 00:05:46.106 "bdev_malloc_create", 00:05:46.106 "bdev_null_resize", 00:05:46.106 "bdev_null_delete", 00:05:46.106 "bdev_null_create", 00:05:46.106 "bdev_nvme_cuse_unregister", 00:05:46.106 "bdev_nvme_cuse_register", 00:05:46.106 "bdev_opal_new_user", 00:05:46.106 "bdev_opal_set_lock_state", 00:05:46.106 "bdev_opal_delete", 00:05:46.106 "bdev_opal_get_info", 00:05:46.106 "bdev_opal_create", 00:05:46.106 "bdev_nvme_opal_revert", 00:05:46.106 "bdev_nvme_opal_init", 00:05:46.106 "bdev_nvme_send_cmd", 00:05:46.106 "bdev_nvme_set_keys", 00:05:46.106 "bdev_nvme_get_path_iostat", 00:05:46.106 "bdev_nvme_get_mdns_discovery_info", 00:05:46.106 "bdev_nvme_stop_mdns_discovery", 00:05:46.106 "bdev_nvme_start_mdns_discovery", 00:05:46.106 "bdev_nvme_set_multipath_policy", 00:05:46.106 "bdev_nvme_set_preferred_path", 00:05:46.106 "bdev_nvme_get_io_paths", 00:05:46.106 "bdev_nvme_remove_error_injection", 00:05:46.106 "bdev_nvme_add_error_injection", 00:05:46.106 "bdev_nvme_get_discovery_info", 00:05:46.106 "bdev_nvme_stop_discovery", 00:05:46.106 "bdev_nvme_start_discovery", 00:05:46.106 "bdev_nvme_get_controller_health_info", 00:05:46.106 "bdev_nvme_disable_controller", 00:05:46.106 "bdev_nvme_enable_controller", 00:05:46.106 "bdev_nvme_reset_controller", 00:05:46.106 "bdev_nvme_get_transport_statistics", 00:05:46.106 "bdev_nvme_apply_firmware", 00:05:46.106 "bdev_nvme_detach_controller", 00:05:46.106 "bdev_nvme_get_controllers", 00:05:46.106 "bdev_nvme_attach_controller", 00:05:46.106 "bdev_nvme_set_hotplug", 00:05:46.106 "bdev_nvme_set_options", 00:05:46.106 "bdev_passthru_delete", 00:05:46.106 "bdev_passthru_create", 00:05:46.106 "bdev_lvol_set_parent_bdev", 00:05:46.106 "bdev_lvol_set_parent", 00:05:46.106 "bdev_lvol_check_shallow_copy", 00:05:46.106 "bdev_lvol_start_shallow_copy", 00:05:46.106 "bdev_lvol_grow_lvstore", 00:05:46.106 "bdev_lvol_get_lvols", 00:05:46.106 "bdev_lvol_get_lvstores", 00:05:46.106 "bdev_lvol_delete", 00:05:46.106 "bdev_lvol_set_read_only", 00:05:46.106 "bdev_lvol_resize", 00:05:46.106 "bdev_lvol_decouple_parent", 00:05:46.106 "bdev_lvol_inflate", 00:05:46.106 "bdev_lvol_rename", 00:05:46.106 "bdev_lvol_clone_bdev", 00:05:46.106 "bdev_lvol_clone", 00:05:46.106 "bdev_lvol_snapshot", 00:05:46.106 "bdev_lvol_create", 00:05:46.106 "bdev_lvol_delete_lvstore", 00:05:46.106 "bdev_lvol_rename_lvstore", 00:05:46.106 
"bdev_lvol_create_lvstore", 00:05:46.107 "bdev_raid_set_options", 00:05:46.107 "bdev_raid_remove_base_bdev", 00:05:46.107 "bdev_raid_add_base_bdev", 00:05:46.107 "bdev_raid_delete", 00:05:46.107 "bdev_raid_create", 00:05:46.107 "bdev_raid_get_bdevs", 00:05:46.107 "bdev_error_inject_error", 00:05:46.107 "bdev_error_delete", 00:05:46.107 "bdev_error_create", 00:05:46.107 "bdev_split_delete", 00:05:46.107 "bdev_split_create", 00:05:46.107 "bdev_delay_delete", 00:05:46.107 "bdev_delay_create", 00:05:46.107 "bdev_delay_update_latency", 00:05:46.107 "bdev_zone_block_delete", 00:05:46.107 "bdev_zone_block_create", 00:05:46.107 "blobfs_create", 00:05:46.107 "blobfs_detect", 00:05:46.107 "blobfs_set_cache_size", 00:05:46.107 "bdev_xnvme_delete", 00:05:46.107 "bdev_xnvme_create", 00:05:46.107 "bdev_aio_delete", 00:05:46.107 "bdev_aio_rescan", 00:05:46.107 "bdev_aio_create", 00:05:46.107 "bdev_ftl_set_property", 00:05:46.107 "bdev_ftl_get_properties", 00:05:46.107 "bdev_ftl_get_stats", 00:05:46.107 "bdev_ftl_unmap", 00:05:46.107 "bdev_ftl_unload", 00:05:46.107 "bdev_ftl_delete", 00:05:46.107 "bdev_ftl_load", 00:05:46.107 "bdev_ftl_create", 00:05:46.107 "bdev_virtio_attach_controller", 00:05:46.107 "bdev_virtio_scsi_get_devices", 00:05:46.107 "bdev_virtio_detach_controller", 00:05:46.107 "bdev_virtio_blk_set_hotplug", 00:05:46.107 "bdev_iscsi_delete", 00:05:46.107 "bdev_iscsi_create", 00:05:46.107 "bdev_iscsi_set_options", 00:05:46.107 "accel_error_inject_error", 00:05:46.107 "ioat_scan_accel_module", 00:05:46.107 "dsa_scan_accel_module", 00:05:46.107 "iaa_scan_accel_module", 00:05:46.107 "keyring_file_remove_key", 00:05:46.107 "keyring_file_add_key", 00:05:46.107 "keyring_linux_set_options", 00:05:46.107 "fsdev_aio_delete", 00:05:46.107 "fsdev_aio_create", 00:05:46.107 "iscsi_get_histogram", 00:05:46.107 "iscsi_enable_histogram", 00:05:46.107 "iscsi_set_options", 00:05:46.107 "iscsi_get_auth_groups", 00:05:46.107 "iscsi_auth_group_remove_secret", 00:05:46.107 "iscsi_auth_group_add_secret", 00:05:46.107 "iscsi_delete_auth_group", 00:05:46.107 "iscsi_create_auth_group", 00:05:46.107 "iscsi_set_discovery_auth", 00:05:46.107 "iscsi_get_options", 00:05:46.107 "iscsi_target_node_request_logout", 00:05:46.107 "iscsi_target_node_set_redirect", 00:05:46.107 "iscsi_target_node_set_auth", 00:05:46.107 "iscsi_target_node_add_lun", 00:05:46.107 "iscsi_get_stats", 00:05:46.107 "iscsi_get_connections", 00:05:46.107 "iscsi_portal_group_set_auth", 00:05:46.107 "iscsi_start_portal_group", 00:05:46.107 "iscsi_delete_portal_group", 00:05:46.107 "iscsi_create_portal_group", 00:05:46.107 "iscsi_get_portal_groups", 00:05:46.107 "iscsi_delete_target_node", 00:05:46.107 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.107 "iscsi_target_node_add_pg_ig_maps", 00:05:46.107 "iscsi_create_target_node", 00:05:46.107 "iscsi_get_target_nodes", 00:05:46.107 "iscsi_delete_initiator_group", 00:05:46.107 "iscsi_initiator_group_remove_initiators", 00:05:46.107 "iscsi_initiator_group_add_initiators", 00:05:46.107 "iscsi_create_initiator_group", 00:05:46.107 "iscsi_get_initiator_groups", 00:05:46.107 "nvmf_set_crdt", 00:05:46.107 "nvmf_set_config", 00:05:46.107 "nvmf_set_max_subsystems", 00:05:46.107 "nvmf_stop_mdns_prr", 00:05:46.107 "nvmf_publish_mdns_prr", 00:05:46.107 "nvmf_subsystem_get_listeners", 00:05:46.107 "nvmf_subsystem_get_qpairs", 00:05:46.107 "nvmf_subsystem_get_controllers", 00:05:46.107 "nvmf_get_stats", 00:05:46.107 "nvmf_get_transports", 00:05:46.107 "nvmf_create_transport", 00:05:46.107 "nvmf_get_targets", 00:05:46.107 
"nvmf_delete_target", 00:05:46.107 "nvmf_create_target", 00:05:46.107 "nvmf_subsystem_allow_any_host", 00:05:46.107 "nvmf_subsystem_set_keys", 00:05:46.107 "nvmf_subsystem_remove_host", 00:05:46.107 "nvmf_subsystem_add_host", 00:05:46.107 "nvmf_ns_remove_host", 00:05:46.107 "nvmf_ns_add_host", 00:05:46.107 "nvmf_subsystem_remove_ns", 00:05:46.107 "nvmf_subsystem_set_ns_ana_group", 00:05:46.107 "nvmf_subsystem_add_ns", 00:05:46.107 "nvmf_subsystem_listener_set_ana_state", 00:05:46.107 "nvmf_discovery_get_referrals", 00:05:46.107 "nvmf_discovery_remove_referral", 00:05:46.107 "nvmf_discovery_add_referral", 00:05:46.107 "nvmf_subsystem_remove_listener", 00:05:46.107 "nvmf_subsystem_add_listener", 00:05:46.107 "nvmf_delete_subsystem", 00:05:46.107 "nvmf_create_subsystem", 00:05:46.107 "nvmf_get_subsystems", 00:05:46.107 "env_dpdk_get_mem_stats", 00:05:46.107 "nbd_get_disks", 00:05:46.107 "nbd_stop_disk", 00:05:46.107 "nbd_start_disk", 00:05:46.107 "ublk_recover_disk", 00:05:46.107 "ublk_get_disks", 00:05:46.107 "ublk_stop_disk", 00:05:46.107 "ublk_start_disk", 00:05:46.107 "ublk_destroy_target", 00:05:46.107 "ublk_create_target", 00:05:46.107 "virtio_blk_create_transport", 00:05:46.107 "virtio_blk_get_transports", 00:05:46.107 "vhost_controller_set_coalescing", 00:05:46.107 "vhost_get_controllers", 00:05:46.107 "vhost_delete_controller", 00:05:46.107 "vhost_create_blk_controller", 00:05:46.107 "vhost_scsi_controller_remove_target", 00:05:46.108 "vhost_scsi_controller_add_target", 00:05:46.108 "vhost_start_scsi_controller", 00:05:46.108 "vhost_create_scsi_controller", 00:05:46.108 "thread_set_cpumask", 00:05:46.108 "scheduler_set_options", 00:05:46.108 "framework_get_governor", 00:05:46.108 "framework_get_scheduler", 00:05:46.108 "framework_set_scheduler", 00:05:46.108 "framework_get_reactors", 00:05:46.108 "thread_get_io_channels", 00:05:46.108 "thread_get_pollers", 00:05:46.108 "thread_get_stats", 00:05:46.108 "framework_monitor_context_switch", 00:05:46.108 "spdk_kill_instance", 00:05:46.108 "log_enable_timestamps", 00:05:46.108 "log_get_flags", 00:05:46.108 "log_clear_flag", 00:05:46.108 "log_set_flag", 00:05:46.108 "log_get_level", 00:05:46.108 "log_set_level", 00:05:46.108 "log_get_print_level", 00:05:46.108 "log_set_print_level", 00:05:46.108 "framework_enable_cpumask_locks", 00:05:46.108 "framework_disable_cpumask_locks", 00:05:46.108 "framework_wait_init", 00:05:46.108 "framework_start_init", 00:05:46.108 "scsi_get_devices", 00:05:46.108 "bdev_get_histogram", 00:05:46.108 "bdev_enable_histogram", 00:05:46.108 "bdev_set_qos_limit", 00:05:46.108 "bdev_set_qd_sampling_period", 00:05:46.108 "bdev_get_bdevs", 00:05:46.108 "bdev_reset_iostat", 00:05:46.108 "bdev_get_iostat", 00:05:46.108 "bdev_examine", 00:05:46.108 "bdev_wait_for_examine", 00:05:46.108 "bdev_set_options", 00:05:46.108 "accel_get_stats", 00:05:46.108 "accel_set_options", 00:05:46.108 "accel_set_driver", 00:05:46.108 "accel_crypto_key_destroy", 00:05:46.108 "accel_crypto_keys_get", 00:05:46.108 "accel_crypto_key_create", 00:05:46.108 "accel_assign_opc", 00:05:46.108 "accel_get_module_info", 00:05:46.108 "accel_get_opc_assignments", 00:05:46.108 "vmd_rescan", 00:05:46.108 "vmd_remove_device", 00:05:46.108 "vmd_enable", 00:05:46.108 "sock_get_default_impl", 00:05:46.108 "sock_set_default_impl", 00:05:46.108 "sock_impl_set_options", 00:05:46.108 "sock_impl_get_options", 00:05:46.108 "iobuf_get_stats", 00:05:46.108 "iobuf_set_options", 00:05:46.108 "keyring_get_keys", 00:05:46.108 "framework_get_pci_devices", 00:05:46.108 
"framework_get_config", 00:05:46.108 "framework_get_subsystems", 00:05:46.108 "fsdev_set_opts", 00:05:46.108 "fsdev_get_opts", 00:05:46.108 "trace_get_info", 00:05:46.108 "trace_get_tpoint_group_mask", 00:05:46.108 "trace_disable_tpoint_group", 00:05:46.108 "trace_enable_tpoint_group", 00:05:46.108 "trace_clear_tpoint_mask", 00:05:46.108 "trace_set_tpoint_mask", 00:05:46.108 "notify_get_notifications", 00:05:46.108 "notify_get_types", 00:05:46.108 "spdk_get_version", 00:05:46.108 "rpc_get_methods" 00:05:46.108 ] 00:05:46.368 04:29:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.368 04:29:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.368 04:29:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59051 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59051 ']' 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59051 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59051 00:05:46.368 killing process with pid 59051 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59051' 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59051 00:05:46.368 04:29:35 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59051 00:05:48.902 ************************************ 00:05:48.902 END TEST spdkcli_tcp 00:05:48.902 ************************************ 00:05:48.902 00:05:48.902 real 0m4.256s 00:05:48.902 user 0m7.490s 00:05:48.902 sys 0m0.678s 00:05:48.902 04:29:38 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.902 04:29:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.902 04:29:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.902 04:29:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.902 04:29:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.902 04:29:38 -- common/autotest_common.sh@10 -- # set +x 00:05:48.902 ************************************ 00:05:48.902 START TEST dpdk_mem_utility 00:05:48.902 ************************************ 00:05:48.902 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.902 * Looking for test storage... 
00:05:48.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:48.902 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.902 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.902 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.902 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.902 04:29:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:49.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.162 04:29:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.162 --rc genhtml_branch_coverage=1 00:05:49.162 --rc genhtml_function_coverage=1 00:05:49.162 --rc genhtml_legend=1 00:05:49.162 --rc geninfo_all_blocks=1 00:05:49.162 --rc geninfo_unexecuted_blocks=1 00:05:49.162 00:05:49.162 ' 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.162 --rc genhtml_branch_coverage=1 00:05:49.162 --rc genhtml_function_coverage=1 00:05:49.162 --rc genhtml_legend=1 00:05:49.162 --rc geninfo_all_blocks=1 00:05:49.162 --rc geninfo_unexecuted_blocks=1 00:05:49.162 00:05:49.162 ' 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.162 --rc genhtml_branch_coverage=1 00:05:49.162 --rc genhtml_function_coverage=1 00:05:49.162 --rc genhtml_legend=1 00:05:49.162 --rc geninfo_all_blocks=1 00:05:49.162 --rc geninfo_unexecuted_blocks=1 00:05:49.162 00:05:49.162 ' 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.162 --rc genhtml_branch_coverage=1 00:05:49.162 --rc genhtml_function_coverage=1 00:05:49.162 --rc genhtml_legend=1 00:05:49.162 --rc geninfo_all_blocks=1 00:05:49.162 --rc geninfo_unexecuted_blocks=1 00:05:49.162 00:05:49.162 ' 00:05:49.162 04:29:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.162 04:29:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59179 00:05:49.162 04:29:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59179 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59179 ']' 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.162 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.163 04:29:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.163 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.163 04:29:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.163 [2024-10-15 04:29:38.524585] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:05:49.163 [2024-10-15 04:29:38.524913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:05:49.520 [2024-10-15 04:29:38.694730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.520 [2024-10-15 04:29:38.813736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.459 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.459 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:50.459 04:29:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.459 04:29:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.459 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.459 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.459 { 00:05:50.459 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.459 } 00:05:50.459 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.459 04:29:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.459 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:50.459 1 heaps totaling size 824.000000 MiB 00:05:50.459 size: 824.000000 MiB heap id: 0 00:05:50.459 end heaps---------- 00:05:50.459 9 mempools totaling size 603.782043 MiB 00:05:50.459 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.459 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.459 size: 100.555481 MiB name: bdev_io_59179 00:05:50.459 size: 50.003479 MiB name: msgpool_59179 00:05:50.459 size: 36.509338 MiB name: fsdev_io_59179 00:05:50.459 size: 21.763794 MiB name: PDU_Pool 00:05:50.459 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.459 size: 4.133484 MiB name: evtpool_59179 00:05:50.459 size: 0.026123 MiB name: Session_Pool 00:05:50.459 end mempools------- 00:05:50.459 6 memzones totaling size 4.142822 MiB 00:05:50.459 size: 1.000366 MiB name: RG_ring_0_59179 00:05:50.459 size: 1.000366 MiB name: RG_ring_1_59179 00:05:50.459 size: 1.000366 MiB name: RG_ring_4_59179 00:05:50.459 size: 1.000366 MiB name: RG_ring_5_59179 00:05:50.459 size: 0.125366 MiB name: RG_ring_2_59179 00:05:50.459 size: 0.015991 MiB name: RG_ring_3_59179 00:05:50.459 end memzones------- 00:05:50.459 04:29:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.459 heap id: 0 total size: 824.000000 MiB number of busy elements: 312 number of free elements: 18 00:05:50.459 list of free elements. 
size: 16.782104 MiB 00:05:50.459 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:50.459 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:50.459 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:50.459 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:50.460 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:50.460 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:50.460 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:50.460 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:50.460 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:50.460 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:50.460 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:50.460 element at address: 0x20001b400000 with size: 0.563660 MiB 00:05:50.460 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:50.460 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:50.460 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:50.460 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:50.460 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:50.460 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:50.460 list of standard malloc elements. size: 199.286987 MiB 00:05:50.460 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:50.460 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:50.460 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:50.460 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:50.460 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:50.460 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:50.460 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:50.460 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:50.460 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:50.460 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:50.460 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:50.460 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:50.460 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:50.460 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:50.460 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:50.460 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:50.461 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:50.461 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4920c0 with size: 0.000244 MiB 
00:05:50.461 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:50.461 element at 
address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:50.461 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:50.461 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886db80 
with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:50.461 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:50.462 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:50.462 list of memzone associated elements. 
size: 607.930908 MiB 00:05:50.462 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:50.462 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.462 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:50.462 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.462 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:50.462 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59179_0 00:05:50.462 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:50.462 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59179_0 00:05:50.462 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:50.462 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59179_0 00:05:50.462 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:50.462 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.462 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:50.462 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.462 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:50.462 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59179_0 00:05:50.462 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:50.462 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59179 00:05:50.462 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:50.462 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59179 00:05:50.462 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:50.462 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.462 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:50.462 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.462 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:50.462 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.462 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:50.462 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.462 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:50.462 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59179 00:05:50.462 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:50.462 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59179 00:05:50.462 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:50.462 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59179 00:05:50.462 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:50.462 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59179 00:05:50.462 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:50.462 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59179 00:05:50.462 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:50.462 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59179 00:05:50.462 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:50.462 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.462 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:50.462 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.462 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:50.462 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.462 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:50.462 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59179 00:05:50.462 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:50.462 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59179 00:05:50.462 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:50.462 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.462 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:50.462 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.462 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:50.462 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59179 00:05:50.462 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:50.462 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.462 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:50.462 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59179 00:05:50.462 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:50.462 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59179 00:05:50.462 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:50.462 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59179 00:05:50.462 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:50.462 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.462 04:29:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.462 04:29:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59179 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59179 ']' 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59179 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59179 00:05:50.462 killing process with pid 59179 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59179' 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59179 00:05:50.462 04:29:39 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59179 00:05:52.994 00:05:52.994 real 0m4.103s 00:05:52.994 user 0m3.960s 00:05:52.994 sys 0m0.634s 00:05:52.994 04:29:42 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.994 04:29:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.994 ************************************ 00:05:52.994 END TEST dpdk_mem_utility 00:05:52.994 ************************************ 00:05:52.994 04:29:42 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:52.994 04:29:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.994 04:29:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.994 04:29:42 -- common/autotest_common.sh@10 -- # set +x 
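[editor's note] The dpdk_mem_utility test that just finished launches spdk_tgt, waits for /var/tmp/spdk.sock, asks the target to dump its DPDK allocator state with the env_dpdk_get_mem_stats RPC (which writes /tmp/spdk_mem_dump.txt, per the JSON fragment above), and then renders that file with scripts/dpdk_mem_info.py: once for the heap/mempool/memzone summary and once with "-m 0" for the per-element breakdown of heap 0 shown above. A rough way to reproduce the same dump by hand against a running target (a sketch; it assumes the default RPC socket and repo-relative versions of the paths seen in this log):

    # Sketch: reproduce the memory dump the test performs, by hand.
    build/bin/spdk_tgt &
    sleep 1                                   # or poll until /var/tmp/spdk.sock appears
    scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                  # heaps / mempools / memzones summary
    scripts/dpdk_mem_info.py -m 0             # element-level view of heap 0
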
00:05:52.994 ************************************ 00:05:52.994 START TEST event 00:05:52.994 ************************************ 00:05:52.994 04:29:42 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:52.994 * Looking for test storage... 00:05:52.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:52.994 04:29:42 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:52.994 04:29:42 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:52.994 04:29:42 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.253 04:29:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.253 04:29:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.253 04:29:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.253 04:29:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.253 04:29:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.253 04:29:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.253 04:29:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.253 04:29:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.253 04:29:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.253 04:29:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.253 04:29:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.253 04:29:42 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.253 04:29:42 event -- scripts/common.sh@345 -- # : 1 00:05:53.253 04:29:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.253 04:29:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.253 04:29:42 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.253 04:29:42 event -- scripts/common.sh@353 -- # local d=1 00:05:53.253 04:29:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.253 04:29:42 event -- scripts/common.sh@355 -- # echo 1 00:05:53.253 04:29:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.253 04:29:42 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.253 04:29:42 event -- scripts/common.sh@353 -- # local d=2 00:05:53.253 04:29:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.253 04:29:42 event -- scripts/common.sh@355 -- # echo 2 00:05:53.253 04:29:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.253 04:29:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.253 04:29:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.253 04:29:42 event -- scripts/common.sh@368 -- # return 0 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.253 --rc genhtml_branch_coverage=1 00:05:53.253 --rc genhtml_function_coverage=1 00:05:53.253 --rc genhtml_legend=1 00:05:53.253 --rc geninfo_all_blocks=1 00:05:53.253 --rc geninfo_unexecuted_blocks=1 00:05:53.253 00:05:53.253 ' 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.253 --rc genhtml_branch_coverage=1 00:05:53.253 --rc genhtml_function_coverage=1 00:05:53.253 --rc genhtml_legend=1 00:05:53.253 --rc 
geninfo_all_blocks=1 00:05:53.253 --rc geninfo_unexecuted_blocks=1 00:05:53.253 00:05:53.253 ' 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.253 --rc genhtml_branch_coverage=1 00:05:53.253 --rc genhtml_function_coverage=1 00:05:53.253 --rc genhtml_legend=1 00:05:53.253 --rc geninfo_all_blocks=1 00:05:53.253 --rc geninfo_unexecuted_blocks=1 00:05:53.253 00:05:53.253 ' 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.253 --rc genhtml_branch_coverage=1 00:05:53.253 --rc genhtml_function_coverage=1 00:05:53.253 --rc genhtml_legend=1 00:05:53.253 --rc geninfo_all_blocks=1 00:05:53.253 --rc geninfo_unexecuted_blocks=1 00:05:53.253 00:05:53.253 ' 00:05:53.253 04:29:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.253 04:29:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.253 04:29:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:53.253 04:29:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.253 04:29:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.253 ************************************ 00:05:53.253 START TEST event_perf 00:05:53.254 ************************************ 00:05:53.254 04:29:42 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.254 Running I/O for 1 seconds...[2024-10-15 04:29:42.655365] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:05:53.254 [2024-10-15 04:29:42.655598] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:05:53.512 [2024-10-15 04:29:42.829982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.512 [2024-10-15 04:29:42.956167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.512 [2024-10-15 04:29:42.956345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.512 [2024-10-15 04:29:42.956499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.512 [2024-10-15 04:29:42.956531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.890 Running I/O for 1 seconds... 00:05:54.890 lcore 0: 188999 00:05:54.890 lcore 1: 189000 00:05:54.890 lcore 2: 188999 00:05:54.890 lcore 3: 189000 00:05:54.890 done. 
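[editor's note] The event_perf run above starts one reactor per core in the 0xF mask and counts dispatched events on each lcore for one second; the near-identical per-lcore totals (about 189,000 each) show the events being spread evenly across the four reactors. The reactor and reactor_perf runs that follow exercise a single core. All three binaries can be invoked directly from an SPDK build (a sketch; arguments exactly as seen in this log, paths relative to the repo root):

    # Sketch: invoking the event micro-benchmarks directly.
    test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts for 1 second
    test/event/reactor/reactor -t 1                # oneshot plus the tick 100/250/500 pollers
    test/event/reactor_perf/reactor_perf -t 1      # single-core events per second
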
00:05:54.890 00:05:54.890 real 0m1.600s 00:05:54.890 user 0m4.351s 00:05:54.890 sys 0m0.125s 00:05:54.890 ************************************ 00:05:54.890 END TEST event_perf 00:05:54.890 ************************************ 00:05:54.890 04:29:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.890 04:29:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.890 04:29:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.890 04:29:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:54.890 04:29:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.890 04:29:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.890 ************************************ 00:05:54.890 START TEST event_reactor 00:05:54.890 ************************************ 00:05:54.890 04:29:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.890 [2024-10-15 04:29:44.318032] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:05:54.891 [2024-10-15 04:29:44.318356] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:05:55.150 [2024-10-15 04:29:44.490880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.150 [2024-10-15 04:29:44.610171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.556 test_start 00:05:56.556 oneshot 00:05:56.556 tick 100 00:05:56.556 tick 100 00:05:56.556 tick 250 00:05:56.556 tick 100 00:05:56.556 tick 100 00:05:56.556 tick 100 00:05:56.556 tick 250 00:05:56.556 tick 500 00:05:56.556 tick 100 00:05:56.556 tick 100 00:05:56.556 tick 250 00:05:56.556 tick 100 00:05:56.556 tick 100 00:05:56.556 test_end 00:05:56.556 ************************************ 00:05:56.556 END TEST event_reactor 00:05:56.556 ************************************ 00:05:56.556 00:05:56.556 real 0m1.575s 00:05:56.556 user 0m1.350s 00:05:56.556 sys 0m0.115s 00:05:56.556 04:29:45 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.556 04:29:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.556 04:29:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.556 04:29:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:56.556 04:29:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.556 04:29:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.556 ************************************ 00:05:56.556 START TEST event_reactor_perf 00:05:56.556 ************************************ 00:05:56.556 04:29:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.556 [2024-10-15 04:29:45.943620] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:05:56.556 [2024-10-15 04:29:45.943759] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:05:56.815 [2024-10-15 04:29:46.116327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.815 [2024-10-15 04:29:46.235185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.194 test_start 00:05:58.194 test_end 00:05:58.194 Performance: 372185 events per second 00:05:58.194 00:05:58.194 real 0m1.551s 00:05:58.194 user 0m1.364s 00:05:58.194 sys 0m0.078s 00:05:58.194 04:29:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.194 04:29:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.194 ************************************ 00:05:58.194 END TEST event_reactor_perf 00:05:58.194 ************************************ 00:05:58.194 04:29:47 event -- event/event.sh@49 -- # uname -s 00:05:58.194 04:29:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.194 04:29:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:58.194 04:29:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.194 04:29:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.194 04:29:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.194 ************************************ 00:05:58.194 START TEST event_scheduler 00:05:58.194 ************************************ 00:05:58.194 04:29:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:58.194 * Looking for test storage... 
00:05:58.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:58.194 04:29:47 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.194 04:29:47 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.194 04:29:47 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.453 04:29:47 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:58.453 04:29:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.454 04:29:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.454 --rc genhtml_branch_coverage=1 00:05:58.454 --rc genhtml_function_coverage=1 00:05:58.454 --rc genhtml_legend=1 00:05:58.454 --rc geninfo_all_blocks=1 00:05:58.454 --rc geninfo_unexecuted_blocks=1 00:05:58.454 00:05:58.454 ' 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.454 --rc genhtml_branch_coverage=1 00:05:58.454 --rc genhtml_function_coverage=1 00:05:58.454 --rc genhtml_legend=1 00:05:58.454 --rc geninfo_all_blocks=1 00:05:58.454 --rc geninfo_unexecuted_blocks=1 00:05:58.454 00:05:58.454 ' 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.454 --rc genhtml_branch_coverage=1 00:05:58.454 --rc genhtml_function_coverage=1 00:05:58.454 --rc genhtml_legend=1 00:05:58.454 --rc geninfo_all_blocks=1 00:05:58.454 --rc geninfo_unexecuted_blocks=1 00:05:58.454 00:05:58.454 ' 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.454 --rc genhtml_branch_coverage=1 00:05:58.454 --rc genhtml_function_coverage=1 00:05:58.454 --rc genhtml_legend=1 00:05:58.454 --rc geninfo_all_blocks=1 00:05:58.454 --rc geninfo_unexecuted_blocks=1 00:05:58.454 00:05:58.454 ' 00:05:58.454 04:29:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.454 04:29:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59439 00:05:58.454 04:29:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.454 04:29:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.454 04:29:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59439 00:05:58.454 04:29:47 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59439 ']' 00:05:58.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.454 04:29:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.454 [2024-10-15 04:29:47.853377] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:05:58.454 [2024-10-15 04:29:47.853518] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59439 ] 00:05:58.711 [2024-10-15 04:29:48.036936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.711 [2024-10-15 04:29:48.165731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.711 [2024-10-15 04:29:48.165938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.711 [2024-10-15 04:29:48.166127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.711 [2024-10-15 04:29:48.166159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.279 04:29:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.279 04:29:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:59.279 04:29:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.279 04:29:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.279 04:29:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.279 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.279 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.280 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.280 POWER: Cannot set governor of lcore 0 to performance 00:05:59.280 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.280 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.280 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.280 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.280 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:59.280 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:59.280 POWER: Unable to set Power Management Environment for lcore 0 00:05:59.280 [2024-10-15 04:29:48.711256] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:59.280 [2024-10-15 04:29:48.711283] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:59.280 [2024-10-15 04:29:48.711299] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:59.280 [2024-10-15 04:29:48.711319] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.280 [2024-10-15 04:29:48.711330] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.280 [2024-10-15 04:29:48.711342] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.280 04:29:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.280 04:29:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.280 04:29:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.280 04:29:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.912 [2024-10-15 04:29:49.060043] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:59.912 04:29:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.912 04:29:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.912 04:29:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.912 04:29:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.912 04:29:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.912 ************************************ 00:05:59.912 START TEST scheduler_create_thread 00:05:59.912 ************************************ 00:05:59.912 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:59.912 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.912 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 2 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 3 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 4 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 5 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 6 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 7 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 8 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 9 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.913 10 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.913 04:29:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.850 04:29:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.850 04:29:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.850 04:29:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.850 04:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.850 04:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.417 04:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.417 04:29:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.417 04:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.417 04:29:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.794 04:29:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.794 04:29:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:02.794 04:29:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:02.794 04:29:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.794 04:29:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.361 04:29:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.361 ************************************ 00:06:03.361 END TEST scheduler_create_thread 00:06:03.361 ************************************ 00:06:03.361 00:06:03.361 real 0m3.556s 00:06:03.361 user 0m0.021s 00:06:03.361 sys 0m0.010s 00:06:03.361 04:29:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.361 04:29:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.361 04:29:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:03.361 04:29:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59439 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59439 ']' 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59439 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59439 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:03.361 killing process with pid 59439 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59439' 00:06:03.361 04:29:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59439 00:06:03.361 04:29:52 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 59439 00:06:03.620 [2024-10-15 04:29:53.009713] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:04.997 ************************************ 00:06:04.997 END TEST event_scheduler 00:06:04.997 ************************************ 00:06:04.997 00:06:04.997 real 0m6.697s 00:06:04.997 user 0m12.611s 00:06:04.997 sys 0m0.534s 00:06:04.997 04:29:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.997 04:29:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.997 04:29:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.997 04:29:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.997 04:29:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.997 04:29:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.997 04:29:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.997 ************************************ 00:06:04.997 START TEST app_repeat 00:06:04.997 ************************************ 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59556 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59556' 00:06:04.997 Process app_repeat pid: 59556 00:06:04.997 spdk_app_start Round 0 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.997 04:29:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59556 ']' 00:06:04.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.997 04:29:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.997 [2024-10-15 04:29:54.392493] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
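Before the app_repeat startup below continues: the scheduler_create_thread test that just ended is driven entirely over JSON-RPC. A hand-runnable sketch of the same sequence follows; the rpc.py and socket paths are this workspace's defaults, the plugin must be importable (the test exports its directory on PYTHONPATH), and the --load-limit/--core-limit/--core-busy flags are assumed to be the framework_set_scheduler options behind the three set_opts notices at the start of this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Configure the dynamic scheduler with the limits the set_opts notices report.
$rpc framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
$rpc framework_start_init
# Create a thread pinned to core 0 (cpumask 0x1) reporting 100% activity;
# the RPC prints the new thread id, which the test captures (thread_id=11 above).
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # halve its busy %
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"          # and remove it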
00:06:04.997 [2024-10-15 04:29:54.392631] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59556 ] 00:06:05.257 [2024-10-15 04:29:54.567662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.257 [2024-10-15 04:29:54.688706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.257 [2024-10-15 04:29:54.688737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.824 04:29:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.824 04:29:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.824 04:29:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.082 Malloc0 00:06:06.340 04:29:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.598 Malloc1 00:06:06.598 04:29:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.598 04:29:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.857 /dev/nbd0 00:06:06.857 04:29:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.857 04:29:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:06.857 04:29:56 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.857 1+0 records in 00:06:06.857 1+0 records out 00:06:06.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653798 s, 6.3 MB/s 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.857 04:29:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.857 04:29:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.857 04:29:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.857 04:29:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.169 /dev/nbd1 00:06:07.169 04:29:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.169 04:29:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.169 1+0 records in 00:06:07.169 1+0 records out 00:06:07.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353649 s, 11.6 MB/s 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.169 04:29:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:07.169 04:29:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.169 04:29:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.169 04:29:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.169 04:29:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.169 
04:29:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.430 { 00:06:07.430 "nbd_device": "/dev/nbd0", 00:06:07.430 "bdev_name": "Malloc0" 00:06:07.430 }, 00:06:07.430 { 00:06:07.430 "nbd_device": "/dev/nbd1", 00:06:07.430 "bdev_name": "Malloc1" 00:06:07.430 } 00:06:07.430 ]' 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.430 { 00:06:07.430 "nbd_device": "/dev/nbd0", 00:06:07.430 "bdev_name": "Malloc0" 00:06:07.430 }, 00:06:07.430 { 00:06:07.430 "nbd_device": "/dev/nbd1", 00:06:07.430 "bdev_name": "Malloc1" 00:06:07.430 } 00:06:07.430 ]' 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.430 /dev/nbd1' 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.430 /dev/nbd1' 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.430 04:29:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.431 256+0 records in 00:06:07.431 256+0 records out 00:06:07.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104058 s, 101 MB/s 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.431 256+0 records in 00:06:07.431 256+0 records out 00:06:07.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316646 s, 33.1 MB/s 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.431 256+0 records in 00:06:07.431 256+0 records out 00:06:07.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0374821 s, 28.0 MB/s 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.431 04:29:56 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.431 04:29:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.689 04:29:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.948 04:29:57 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.948 04:29:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.206 04:29:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.206 04:29:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.773 04:29:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.778 [2024-10-15 04:29:59.225908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.037 [2024-10-15 04:29:59.336937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.037 [2024-10-15 04:29:59.336938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.037 [2024-10-15 04:29:59.530868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.037 [2024-10-15 04:29:59.531000] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.939 spdk_app_start Round 1 00:06:11.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.939 04:30:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.939 04:30:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.939 04:30:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59556 ']' 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
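Round 1 below repeats exactly what Round 0 just did, so the data path is worth spelling out once. Each round reduces to one integrity pass per NBD device: fill a scratch file with random bytes, write it through the device with O_DIRECT, and compare it back. A condensed sketch using the same commands and scratch path as the xtrace (it assumes Malloc0/Malloc1 are already exported as /dev/nbd0 and /dev/nbd1):

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write through the NBD
        cmp -b -n 1M "$tmp" "$dev"                               # byte-for-byte readback check
done
rm "$tmp"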
00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.939 04:30:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:11.939 04:30:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.196 Malloc0 00:06:12.196 04:30:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.455 Malloc1 00:06:12.455 04:30:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.455 04:30:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.715 /dev/nbd0 00:06:12.715 04:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.715 04:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.715 1+0 records in 00:06:12.715 1+0 records out 
00:06:12.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430102 s, 9.5 MB/s 00:06:12.715 04:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.973 04:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.973 04:30:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.973 04:30:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.973 04:30:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.973 04:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.973 04:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.973 04:30:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.973 /dev/nbd1 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.232 1+0 records in 00:06:13.232 1+0 records out 00:06:13.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463187 s, 8.8 MB/s 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.232 04:30:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.232 04:30:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.491 { 00:06:13.491 "nbd_device": "/dev/nbd0", 00:06:13.491 "bdev_name": "Malloc0" 00:06:13.491 }, 00:06:13.491 { 00:06:13.491 "nbd_device": "/dev/nbd1", 00:06:13.491 "bdev_name": "Malloc1" 00:06:13.491 } 
00:06:13.491 ]' 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.491 { 00:06:13.491 "nbd_device": "/dev/nbd0", 00:06:13.491 "bdev_name": "Malloc0" 00:06:13.491 }, 00:06:13.491 { 00:06:13.491 "nbd_device": "/dev/nbd1", 00:06:13.491 "bdev_name": "Malloc1" 00:06:13.491 } 00:06:13.491 ]' 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.491 /dev/nbd1' 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.491 /dev/nbd1' 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.491 04:30:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.492 256+0 records in 00:06:13.492 256+0 records out 00:06:13.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138755 s, 75.6 MB/s 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.492 256+0 records in 00:06:13.492 256+0 records out 00:06:13.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031295 s, 33.5 MB/s 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.492 256+0 records in 00:06:13.492 256+0 records out 00:06:13.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369046 s, 28.4 MB/s 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.492 04:30:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.750 04:30:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.010 04:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.268 04:30:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.268 04:30:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.835 04:30:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.213 [2024-10-15 04:30:05.373434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.213 [2024-10-15 04:30:05.493451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.213 [2024-10-15 04:30:05.493472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.213 [2024-10-15 04:30:05.695702] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.213 [2024-10-15 04:30:05.696081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.121 04:30:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.121 04:30:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.121 spdk_app_start Round 2 00:06:18.121 04:30:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59556 ']' 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
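The count=2 / count=0 checks that bracket every round come from listing the attached NBDs over RPC and counting device nodes; after nbd_stop_disk the list is empty, which is the '[' 0 -ne 0 ']' branch above. A minimal sketch of that check with the exact jq/grep pipeline from the xtrace (the || true absorbs grep's non-zero exit when nothing is attached):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nbd_disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # prints 0 on an empty list
echo "attached NBD devices: $count"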
00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.121 04:30:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.121 04:30:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.380 Malloc0 00:06:18.380 04:30:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.637 Malloc1 00:06:18.637 04:30:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.637 04:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.896 /dev/nbd0 00:06:18.896 04:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.896 04:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.896 1+0 records in 00:06:18.896 1+0 records out 
00:06:18.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327985 s, 12.5 MB/s 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.896 04:30:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:18.896 04:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.896 04:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.896 04:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.155 /dev/nbd1 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.155 1+0 records in 00:06:19.155 1+0 records out 00:06:19.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331639 s, 12.4 MB/s 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.155 04:30:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.155 04:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.415 04:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.415 { 00:06:19.415 "nbd_device": "/dev/nbd0", 00:06:19.415 "bdev_name": "Malloc0" 00:06:19.415 }, 00:06:19.415 { 00:06:19.415 "nbd_device": "/dev/nbd1", 00:06:19.415 "bdev_name": "Malloc1" 00:06:19.415 } 
00:06:19.415 ]' 00:06:19.415 04:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.415 { 00:06:19.415 "nbd_device": "/dev/nbd0", 00:06:19.415 "bdev_name": "Malloc0" 00:06:19.415 }, 00:06:19.415 { 00:06:19.415 "nbd_device": "/dev/nbd1", 00:06:19.415 "bdev_name": "Malloc1" 00:06:19.415 } 00:06:19.415 ]' 00:06:19.415 04:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.673 /dev/nbd1' 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.673 /dev/nbd1' 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.673 256+0 records in 00:06:19.673 256+0 records out 00:06:19.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175182 s, 59.9 MB/s 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.673 04:30:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.673 256+0 records in 00:06:19.673 256+0 records out 00:06:19.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330726 s, 31.7 MB/s 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.673 256+0 records in 00:06:19.673 256+0 records out 00:06:19.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02692 s, 39.0 MB/s 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.673 04:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.972 04:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.232 04:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.491 04:30:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.491 04:30:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.057 04:30:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.436 [2024-10-15 04:30:11.604651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.436 [2024-10-15 04:30:11.718624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.436 [2024-10-15 04:30:11.718626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.436 [2024-10-15 04:30:11.917467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.436 [2024-10-15 04:30:11.917573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.386 04:30:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59556 /var/tmp/spdk-nbd.sock 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59556 ']' 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
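(Aside: the write/verify pass traced above — nbd_common.sh's nbd_dd_data_verify — reduces to the sketch below. The temp-file path, block sizes, and nbd devices are taken from the trace; the loop itself is a minimal reconstruction, not the library's exact code.)

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # generate 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it through each nbd device
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # byte-compare the first 1 MiB back
  done
  rm "$tmp_file"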
00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:24.386 04:30:13 event.app_repeat -- event/event.sh@39 -- # killprocess 59556 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59556 ']' 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59556 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59556 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.386 killing process with pid 59556 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59556' 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59556 00:06:24.386 04:30:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59556 00:06:25.323 spdk_app_start is called in Round 0. 00:06:25.323 Shutdown signal received, stop current app iteration 00:06:25.323 Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 reinitialization... 00:06:25.323 spdk_app_start is called in Round 1. 00:06:25.323 Shutdown signal received, stop current app iteration 00:06:25.323 Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 reinitialization... 00:06:25.323 spdk_app_start is called in Round 2. 00:06:25.323 Shutdown signal received, stop current app iteration 00:06:25.323 Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 reinitialization... 00:06:25.323 spdk_app_start is called in Round 3. 00:06:25.323 Shutdown signal received, stop current app iteration 00:06:25.582 04:30:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.582 04:30:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.582 00:06:25.582 real 0m20.513s 00:06:25.582 user 0m44.046s 00:06:25.582 sys 0m3.450s 00:06:25.582 04:30:14 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.582 04:30:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.582 ************************************ 00:06:25.582 END TEST app_repeat 00:06:25.582 ************************************ 00:06:25.582 04:30:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.582 04:30:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.582 04:30:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.582 04:30:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.583 04:30:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.583 ************************************ 00:06:25.583 START TEST cpu_locks 00:06:25.583 ************************************ 00:06:25.583 04:30:14 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.583 * Looking for test storage... 
00:06:25.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:25.583 04:30:15 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.583 04:30:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.583 04:30:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.583 04:30:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.583 04:30:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.842 04:30:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.842 --rc genhtml_branch_coverage=1 00:06:25.842 --rc genhtml_function_coverage=1 00:06:25.842 --rc genhtml_legend=1 00:06:25.842 --rc geninfo_all_blocks=1 00:06:25.842 --rc geninfo_unexecuted_blocks=1 00:06:25.842 00:06:25.842 ' 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.842 --rc genhtml_branch_coverage=1 00:06:25.842 --rc genhtml_function_coverage=1 
00:06:25.842 --rc genhtml_legend=1 00:06:25.842 --rc geninfo_all_blocks=1 00:06:25.842 --rc geninfo_unexecuted_blocks=1 00:06:25.842 00:06:25.842 ' 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.842 --rc genhtml_branch_coverage=1 00:06:25.842 --rc genhtml_function_coverage=1 00:06:25.842 --rc genhtml_legend=1 00:06:25.842 --rc geninfo_all_blocks=1 00:06:25.842 --rc geninfo_unexecuted_blocks=1 00:06:25.842 00:06:25.842 ' 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.842 --rc genhtml_branch_coverage=1 00:06:25.842 --rc genhtml_function_coverage=1 00:06:25.842 --rc genhtml_legend=1 00:06:25.842 --rc geninfo_all_blocks=1 00:06:25.842 --rc geninfo_unexecuted_blocks=1 00:06:25.842 00:06:25.842 ' 00:06:25.842 04:30:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.842 04:30:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.842 04:30:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.842 04:30:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.842 04:30:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.842 ************************************ 00:06:25.842 START TEST default_locks 00:06:25.842 ************************************ 00:06:25.842 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60014 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60014 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60014 ']' 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.843 04:30:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.843 [2024-10-15 04:30:15.227882] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
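(Aside: the "lt 1.15 2" check traced above compares dot-separated version fields numerically, left to right. A standalone sketch, simplified to dots only — the traced cmp_versions also splits on "-" and ":" — with the hypothetical name version_lt:)

  version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo older     # prints "older", matching the lcov gate above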
00:06:25.843 [2024-10-15 04:30:15.228067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60014 ] 00:06:26.102 [2024-10-15 04:30:15.409599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.102 [2024-10-15 04:30:15.523728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.103 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.103 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:27.103 04:30:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60014 00:06:27.103 04:30:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.103 04:30:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60014 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60014 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60014 ']' 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60014 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60014 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.670 killing process with pid 60014 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60014' 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60014 00:06:27.670 04:30:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60014 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60014 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60014 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60014 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60014 ']' 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.222 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.222 ERROR: process (pid: 60014) is no longer running 00:06:30.222 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60014) - No such process 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.222 00:06:30.222 real 0m4.319s 00:06:30.222 user 0m4.241s 00:06:30.222 sys 0m0.748s 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.222 ************************************ 00:06:30.222 END TEST default_locks 00:06:30.222 04:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.222 ************************************ 00:06:30.222 04:30:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:30.222 04:30:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.222 04:30:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.222 04:30:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.222 ************************************ 00:06:30.222 START TEST default_locks_via_rpc 00:06:30.222 ************************************ 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60097 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60097 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60097 ']' 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
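(Aside: the locks_exist check used throughout these tests is just the lslocks query traced above; in the spirit of cpu_locks.sh@22:)

  locks_exist() {
    # a held core lock shows up as a lock on /var/tmp/spdk_cpu_lock_* for that pid
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 60014 && echo "pid 60014 holds its core lock"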
00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.223 04:30:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.223 [2024-10-15 04:30:19.606315] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:06:30.223 [2024-10-15 04:30:19.606475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:06:30.482 [2024-10-15 04:30:19.779841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.482 [2024-10-15 04:30:19.896032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60097 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60097 00:06:31.423 04:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60097 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60097 ']' 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60097 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60097 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.990 killing process with pid 60097 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60097' 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60097 00:06:31.990 04:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60097 00:06:34.530 00:06:34.530 real 0m4.323s 00:06:34.530 user 0m4.311s 00:06:34.530 sys 0m0.727s 00:06:34.530 ************************************ 00:06:34.530 END TEST default_locks_via_rpc 00:06:34.530 ************************************ 00:06:34.530 04:30:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.530 04:30:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.530 04:30:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:34.530 04:30:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.530 04:30:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.530 04:30:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.530 ************************************ 00:06:34.530 START TEST non_locking_app_on_locked_coremask 00:06:34.530 ************************************ 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60171 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60171 /var/tmp/spdk.sock 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60171 ']' 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.530 04:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.530 [2024-10-15 04:30:24.000604] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
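(Aside: the via_rpc variant above toggles the same core locks on a live target instead of at launch time. Roughly, using the rpc.py wrapper seen throughout this run against its default /var/tmp/spdk.sock socket:)

  # drop the /var/tmp/spdk_cpu_lock_* files held by the running target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
  # re-acquire them for the target's current cpumask
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks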
00:06:34.530 [2024-10-15 04:30:24.000743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:06:34.790 [2024-10-15 04:30:24.171895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.790 [2024-10-15 04:30:24.282806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60187 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60187 /var/tmp/spdk2.sock 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60187 ']' 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.730 04:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.988 [2024-10-15 04:30:25.277108] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:06:35.988 [2024-10-15 04:30:25.277472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:06:35.988 [2024-10-15 04:30:25.445990] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.988 [2024-10-15 04:30:25.446042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.248 [2024-10-15 04:30:25.689424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.779 04:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.779 04:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:38.779 04:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60171 00:06:38.779 04:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60171 00:06:38.779 04:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60171 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60171 ']' 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60171 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60171 00:06:39.346 killing process with pid 60171 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60171' 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60171 00:06:39.346 04:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60171 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60187 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60187 ']' 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60187 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60187 00:06:44.617 killing process with pid 60187 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60187' 00:06:44.617 04:30:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60187 00:06:44.617 04:30:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60187 00:06:47.186 ************************************ 00:06:47.186 END TEST non_locking_app_on_locked_coremask 00:06:47.186 ************************************ 00:06:47.186 00:06:47.186 real 0m12.274s 00:06:47.186 user 0m12.667s 00:06:47.186 sys 0m1.442s 00:06:47.186 04:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.186 04:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.186 04:30:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.186 04:30:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.186 04:30:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.186 04:30:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.186 ************************************ 00:06:47.186 START TEST locking_app_on_unlocked_coremask 00:06:47.186 ************************************ 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60349 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60349 /var/tmp/spdk.sock 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60349 ']' 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.186 04:30:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.186 [2024-10-15 04:30:36.354140] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:06:47.186 [2024-10-15 04:30:36.354271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60349 ] 00:06:47.186 [2024-10-15 04:30:36.527263] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
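(Aside: the non_locking test that just finished pins down the coexistence rule; schematically, with the binary path and sockets from the trace:)

  # first target claims core 0 by locking /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # a second target on the same core starts only because it opts out of locking
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &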
00:06:47.186 [2024-10-15 04:30:36.527379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.186 [2024-10-15 04:30:36.649195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60365 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60365 /var/tmp/spdk2.sock 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60365 ']' 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.126 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.127 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.127 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.127 04:30:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.385 [2024-10-15 04:30:37.666698] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:06:48.385 [2024-10-15 04:30:37.667116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:06:48.385 [2024-10-15 04:30:37.838981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.642 [2024-10-15 04:30:38.072084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.282 04:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.282 04:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.282 04:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60365 00:06:51.282 04:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60365 00:06:51.282 04:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60349 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60349 ']' 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60349 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60349 00:06:51.849 killing process with pid 60349 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60349' 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60349 00:06:51.849 04:30:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60349 00:06:57.123 04:30:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60365 00:06:57.123 04:30:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60365 ']' 00:06:57.123 04:30:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60365 00:06:57.123 04:30:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.123 04:30:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.123 04:30:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60365 00:06:57.123 killing process with pid 60365 00:06:57.123 04:30:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.123 04:30:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.123 04:30:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60365' 00:06:57.123 04:30:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60365 00:06:57.123 04:30:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60365 00:06:59.029 ************************************ 00:06:59.029 END TEST locking_app_on_unlocked_coremask 00:06:59.029 ************************************ 00:06:59.029 00:06:59.029 real 0m12.178s 00:06:59.029 user 0m12.510s 00:06:59.029 sys 0m1.430s 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.029 04:30:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.029 04:30:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.029 04:30:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.029 04:30:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.029 ************************************ 00:06:59.029 START TEST locking_app_on_locked_coremask 00:06:59.029 ************************************ 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60519 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60519 /var/tmp/spdk.sock 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60519 ']' 00:06:59.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.029 04:30:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.289 [2024-10-15 04:30:48.597099] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:06:59.289 [2024-10-15 04:30:48.597237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60519 ] 00:06:59.289 [2024-10-15 04:30:48.766709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.548 [2024-10-15 04:30:48.881608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60535 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60535 /var/tmp/spdk2.sock 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60535 /var/tmp/spdk2.sock 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60535 /var/tmp/spdk2.sock 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60535 ']' 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.487 04:30:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.487 [2024-10-15 04:30:49.838722] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:07:00.487 [2024-10-15 04:30:49.839403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:07:00.746 [2024-10-15 04:30:50.007740] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60519 has claimed it. 00:07:00.746 [2024-10-15 04:30:50.007808] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.005 ERROR: process (pid: 60535) is no longer running 00:07:01.005 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60535) - No such process 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60519 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60519 00:07:01.005 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.573 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60519 00:07:01.573 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60519 ']' 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60519 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60519 00:07:01.574 killing process with pid 60519 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60519' 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60519 00:07:01.574 04:30:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60519 00:07:04.109 00:07:04.109 real 0m4.926s 00:07:04.109 user 0m5.058s 00:07:04.109 sys 0m0.866s 00:07:04.109 04:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.109 ************************************ 00:07:04.109 END 
TEST locking_app_on_locked_coremask 00:07:04.109 ************************************ 00:07:04.109 04:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.109 04:30:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.109 04:30:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.109 04:30:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.109 04:30:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.109 ************************************ 00:07:04.109 START TEST locking_overlapped_coremask 00:07:04.109 ************************************ 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60606 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60606 /var/tmp/spdk.sock 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60606 ']' 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.109 04:30:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.109 [2024-10-15 04:30:53.603644] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
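(Aside: the conflict the next launch runs into is plain mask arithmetic — the second instance below uses -m 0x1c against this -m 0x7 target:)

  printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: both masks claim core 2, hence the
                                      #    "Cannot create lock on core 2" error below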
00:07:04.109 [2024-10-15 04:30:53.603794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:07:04.368 [2024-10-15 04:30:53.778307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.627 [2024-10-15 04:30:53.900856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.627 [2024-10-15 04:30:53.900932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.627 [2024-10-15 04:30:53.900885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.569 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.569 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:05.569 04:30:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60628 00:07:05.569 04:30:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60628 /var/tmp/spdk2.sock 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60628 /var/tmp/spdk2.sock 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60628 /var/tmp/spdk2.sock 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60628 ']' 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.570 04:30:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.570 [2024-10-15 04:30:54.955117] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:07:05.570 [2024-10-15 04:30:54.955320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60628 ] 00:07:05.829 [2024-10-15 04:30:55.146998] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60606 has claimed it. 00:07:05.829 [2024-10-15 04:30:55.147078] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.398 ERROR: process (pid: 60628) is no longer running 00:07:06.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60628) - No such process 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60606 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60606 ']' 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60606 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60606 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60606' 00:07:06.398 killing process with pid 60606 00:07:06.398 04:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60606 00:07:06.398 04:30:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60606 00:07:08.933 00:07:08.933 real 0m4.698s 00:07:08.933 user 0m12.796s 00:07:08.933 sys 0m0.662s 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.933 ************************************ 00:07:08.933 END TEST locking_overlapped_coremask 00:07:08.933 ************************************ 00:07:08.933 04:30:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:08.933 04:30:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.933 04:30:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.933 04:30:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.933 ************************************ 00:07:08.933 START TEST locking_overlapped_coremask_via_rpc 00:07:08.933 ************************************ 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60692 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60692 /var/tmp/spdk.sock 00:07:08.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60692 ']' 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.933 04:30:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.933 [2024-10-15 04:30:58.374392] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:07:08.933 [2024-10-15 04:30:58.374530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60692 ] 00:07:09.192 [2024-10-15 04:30:58.544086] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
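This second case, locking_overlapped_coremask_via_rpc, reruns the overlap with both targets started under --disable-cpumask-locks, so neither claims lock files at boot and the overlapping masks can coexist; locking is then exercised over JSON-RPC instead of at startup. A sketch of the startup half, assembled from the commands in this test:

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # pid 60692 here
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 60716 here
    # both report "CPU core locks deactivated." and start all reactors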
00:07:09.192 [2024-10-15 04:30:58.544180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.451 [2024-10-15 04:30:58.708628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.451 [2024-10-15 04:30:58.708773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.451 [2024-10-15 04:30:58.708785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60716 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60716 /var/tmp/spdk2.sock 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60716 ']' 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.389 04:30:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.389 [2024-10-15 04:30:59.772265] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:07:10.389 [2024-10-15 04:30:59.772669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:07:10.648 [2024-10-15 04:30:59.947047] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
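With both instances up, locking is re-enabled per instance through the framework_enable_cpumask_locks RPC: the first call, against the default /var/tmp/spdk.sock, claims cores 0-2, so the same call against the second socket must fail on the shared core 2. The rpc_cmd lines below boil down to:

    scripts/rpc.py framework_enable_cpumask_locks                         # first instance: claims its cores
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second instance: expected failure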
00:07:10.648 [2024-10-15 04:30:59.947112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.906 [2024-10-15 04:31:00.204366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.906 [2024-10-15 04:31:00.207992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.906 [2024-10-15 04:31:00.208023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.453 [2024-10-15 04:31:02.389018] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60692 has claimed it. 
00:07:13.453 request: 00:07:13.453 { 00:07:13.453 "method": "framework_enable_cpumask_locks", 00:07:13.453 "req_id": 1 00:07:13.453 } 00:07:13.453 Got JSON-RPC error response 00:07:13.453 response: 00:07:13.453 { 00:07:13.453 "code": -32603, 00:07:13.453 "message": "Failed to claim CPU core: 2" 00:07:13.453 } 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60692 /var/tmp/spdk.sock 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60692 ']' 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60716 /var/tmp/spdk2.sock 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60716 ']' 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
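Unlike the startup-time variant, the failure here surfaces as a JSON-RPC error rather than a process exit: -32603 is the JSON-RPC "internal error" code, the daemon stays alive, and the NOT wrapper treats rpc.py's non-zero exit status as the expected result. The captured exchange reduces to:

    # request:  {"method": "framework_enable_cpumask_locks", "req_id": 1}
    # response: {"code": -32603, "message": "Failed to claim CPU core: 2"}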
00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.453 ************************************ 00:07:13.453 END TEST locking_overlapped_coremask_via_rpc 00:07:13.453 ************************************ 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.453 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.454 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.454 00:07:13.454 real 0m4.644s 00:07:13.454 user 0m1.393s 00:07:13.454 sys 0m0.248s 00:07:13.454 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.454 04:31:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.454 04:31:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:13.454 04:31:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60692 ]] 00:07:13.712 04:31:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60692 00:07:13.712 04:31:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60692 ']' 00:07:13.712 04:31:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60692 00:07:13.712 04:31:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:13.712 04:31:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.712 04:31:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60692 00:07:13.712 04:31:03 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.712 killing process with pid 60692 00:07:13.712 04:31:03 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.712 04:31:03 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60692' 00:07:13.712 04:31:03 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60692 00:07:13.712 04:31:03 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60692 00:07:16.241 04:31:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60716 ]] 00:07:16.241 04:31:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60716 00:07:16.241 04:31:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60716 ']' 00:07:16.241 04:31:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60716 00:07:16.241 04:31:05 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:16.241 04:31:05 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.242 
04:31:05 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60716 00:07:16.242 killing process with pid 60716 00:07:16.242 04:31:05 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:16.242 04:31:05 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:16.242 04:31:05 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60716' 00:07:16.242 04:31:05 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60716 00:07:16.242 04:31:05 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60716 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.775 Process with pid 60692 is not found 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60692 ]] 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60692 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60692 ']' 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60692 00:07:18.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60692) - No such process 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60692 is not found' 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60716 ]] 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60716 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60716 ']' 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60716 00:07:18.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60716) - No such process 00:07:18.775 Process with pid 60716 is not found 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60716 is not found' 00:07:18.775 04:31:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.775 ************************************ 00:07:18.775 END TEST cpu_locks 00:07:18.775 ************************************ 00:07:18.775 00:07:18.775 real 0m53.120s 00:07:18.775 user 1m30.458s 00:07:18.775 sys 0m7.400s 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.775 04:31:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.775 ************************************ 00:07:18.775 END TEST event 00:07:18.775 ************************************ 00:07:18.775 00:07:18.775 real 1m25.712s 00:07:18.775 user 2m34.445s 00:07:18.775 sys 0m12.101s 00:07:18.775 04:31:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.775 04:31:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.775 04:31:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:18.775 04:31:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.775 04:31:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.775 04:31:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.775 ************************************ 00:07:18.775 START TEST thread 00:07:18.775 ************************************ 00:07:18.775 04:31:08 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:18.775 * Looking for test storage... 
00:07:18.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:18.775 04:31:08 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.775 04:31:08 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.775 04:31:08 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.034 04:31:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.034 04:31:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.034 04:31:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.034 04:31:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.034 04:31:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.034 04:31:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.034 04:31:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.034 04:31:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.034 04:31:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.034 04:31:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.034 04:31:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.034 04:31:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:19.034 04:31:08 thread -- scripts/common.sh@345 -- # : 1 00:07:19.034 04:31:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.034 04:31:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.034 04:31:08 thread -- scripts/common.sh@365 -- # decimal 1 00:07:19.034 04:31:08 thread -- scripts/common.sh@353 -- # local d=1 00:07:19.034 04:31:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.034 04:31:08 thread -- scripts/common.sh@355 -- # echo 1 00:07:19.034 04:31:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.034 04:31:08 thread -- scripts/common.sh@366 -- # decimal 2 00:07:19.034 04:31:08 thread -- scripts/common.sh@353 -- # local d=2 00:07:19.034 04:31:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.034 04:31:08 thread -- scripts/common.sh@355 -- # echo 2 00:07:19.034 04:31:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.034 04:31:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.034 04:31:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.034 04:31:08 thread -- scripts/common.sh@368 -- # return 0 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.034 --rc genhtml_branch_coverage=1 00:07:19.034 --rc genhtml_function_coverage=1 00:07:19.034 --rc genhtml_legend=1 00:07:19.034 --rc geninfo_all_blocks=1 00:07:19.034 --rc geninfo_unexecuted_blocks=1 00:07:19.034 00:07:19.034 ' 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.034 --rc genhtml_branch_coverage=1 00:07:19.034 --rc genhtml_function_coverage=1 00:07:19.034 --rc genhtml_legend=1 00:07:19.034 --rc geninfo_all_blocks=1 00:07:19.034 --rc geninfo_unexecuted_blocks=1 00:07:19.034 00:07:19.034 ' 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:19.034 --rc genhtml_branch_coverage=1 00:07:19.034 --rc genhtml_function_coverage=1 00:07:19.034 --rc genhtml_legend=1 00:07:19.034 --rc geninfo_all_blocks=1 00:07:19.034 --rc geninfo_unexecuted_blocks=1 00:07:19.034 00:07:19.034 ' 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.034 --rc genhtml_branch_coverage=1 00:07:19.034 --rc genhtml_function_coverage=1 00:07:19.034 --rc genhtml_legend=1 00:07:19.034 --rc geninfo_all_blocks=1 00:07:19.034 --rc geninfo_unexecuted_blocks=1 00:07:19.034 00:07:19.034 ' 00:07:19.034 04:31:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.034 04:31:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.034 ************************************ 00:07:19.034 START TEST thread_poller_perf 00:07:19.034 ************************************ 00:07:19.034 04:31:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.034 [2024-10-15 04:31:08.448456] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:07:19.034 [2024-10-15 04:31:08.448735] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60916 ] 00:07:19.292 [2024-10-15 04:31:08.627760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.292 Running 1000 pollers for 1 seconds with 1 microseconds period. 
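poller_perf registers a batch of pollers on one reactor and reports the average cost of a poller run: per the banner above, -b 1000 pollers with a -l 1 microsecond period for -t 1 second. Read from the output format below (an inference from the printed fields, not a documented spec):

    poller_cost (cyc)  = busy cycles / total_run_count
    poller_cost (nsec) = poller_cost (cyc) / (tsc_hz / 1e9)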
00:07:19.292 [2024-10-15 04:31:08.788384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.687 [2024-10-15T04:31:10.191Z] ====================================== 00:07:20.687 [2024-10-15T04:31:10.191Z] busy:2503497502 (cyc) 00:07:20.687 [2024-10-15T04:31:10.191Z] total_run_count: 383000 00:07:20.687 [2024-10-15T04:31:10.191Z] tsc_hz: 2490000000 (cyc) 00:07:20.687 [2024-10-15T04:31:10.191Z] ====================================== 00:07:20.687 [2024-10-15T04:31:10.191Z] poller_cost: 6536 (cyc), 2624 (nsec) 00:07:20.687 00:07:20.687 real 0m1.640s 00:07:20.687 user 0m1.409s 00:07:20.687 sys 0m0.120s 00:07:20.687 ************************************ 00:07:20.687 END TEST thread_poller_perf 00:07:20.687 ************************************ 00:07:20.687 04:31:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.687 04:31:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.687 04:31:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.687 04:31:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:20.687 04:31:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.687 04:31:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.687 ************************************ 00:07:20.687 START TEST thread_poller_perf 00:07:20.687 ************************************ 00:07:20.687 04:31:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.687 [2024-10-15 04:31:10.177530] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:07:20.687 [2024-10-15 04:31:10.177740] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:07:20.949 [2024-10-15 04:31:10.367609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.207 Running 1000 pollers for 1 seconds with 0 microseconds period. 
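Plugging the first run's figures into that division reproduces the reported cost; the second run, just started above, repeats the measurement with -l 0, i.e. pollers with a 0 microsecond period that fire on every reactor iteration:

    2503497502 cyc / 383000 runs  ≈ 6536 cyc per poller run
    6536 cyc / 2.49 cyc-per-nsec  ≈ 2624 nsec   (tsc_hz = 2490000000)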
00:07:21.207 [2024-10-15 04:31:10.508355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.583 [2024-10-15T04:31:12.087Z] ====================================== 00:07:22.583 [2024-10-15T04:31:12.087Z] busy:2495119988 (cyc) 00:07:22.583 [2024-10-15T04:31:12.087Z] total_run_count: 4901000 00:07:22.583 [2024-10-15T04:31:12.087Z] tsc_hz: 2490000000 (cyc) 00:07:22.583 [2024-10-15T04:31:12.087Z] ====================================== 00:07:22.583 [2024-10-15T04:31:12.087Z] poller_cost: 509 (cyc), 204 (nsec) 00:07:22.583 00:07:22.583 real 0m1.627s 00:07:22.583 user 0m1.391s 00:07:22.583 sys 0m0.127s 00:07:22.584 04:31:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.584 ************************************ 00:07:22.584 END TEST thread_poller_perf 00:07:22.584 ************************************ 00:07:22.584 04:31:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.584 04:31:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:22.584 ************************************ 00:07:22.584 END TEST thread 00:07:22.584 ************************************ 00:07:22.584 00:07:22.584 real 0m3.651s 00:07:22.584 user 0m2.992s 00:07:22.584 sys 0m0.447s 00:07:22.584 04:31:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.584 04:31:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.584 04:31:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:22.584 04:31:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.584 04:31:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.584 04:31:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.584 04:31:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.584 ************************************ 00:07:22.584 START TEST app_cmdline 00:07:22.584 ************************************ 00:07:22.584 04:31:11 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.584 * Looking for test storage... 
00:07:22.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.584 04:31:11 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.584 04:31:11 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.584 04:31:11 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.584 04:31:12 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.584 04:31:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:22.843 04:31:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.843 04:31:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.843 04:31:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.843 04:31:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.843 --rc genhtml_branch_coverage=1 00:07:22.843 --rc genhtml_function_coverage=1 00:07:22.843 --rc genhtml_legend=1 00:07:22.843 --rc geninfo_all_blocks=1 00:07:22.843 --rc geninfo_unexecuted_blocks=1 00:07:22.843 00:07:22.843 ' 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.843 --rc genhtml_branch_coverage=1 00:07:22.843 --rc genhtml_function_coverage=1 00:07:22.843 --rc genhtml_legend=1 00:07:22.843 --rc geninfo_all_blocks=1 00:07:22.843 --rc geninfo_unexecuted_blocks=1 00:07:22.843 
00:07:22.843 ' 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.843 --rc genhtml_branch_coverage=1 00:07:22.843 --rc genhtml_function_coverage=1 00:07:22.843 --rc genhtml_legend=1 00:07:22.843 --rc geninfo_all_blocks=1 00:07:22.843 --rc geninfo_unexecuted_blocks=1 00:07:22.843 00:07:22.843 ' 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.843 --rc genhtml_branch_coverage=1 00:07:22.843 --rc genhtml_function_coverage=1 00:07:22.843 --rc genhtml_legend=1 00:07:22.843 --rc geninfo_all_blocks=1 00:07:22.843 --rc geninfo_unexecuted_blocks=1 00:07:22.843 00:07:22.843 ' 00:07:22.843 04:31:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.843 04:31:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61042 00:07:22.843 04:31:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.843 04:31:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61042 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61042 ']' 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.843 04:31:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.843 [2024-10-15 04:31:12.195586] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
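cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, an RPC whitelist: only those two methods are served on the socket. The rest of the test follows directly from that, as the xtrace below shows:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # whitelisted: returns the version object
    scripts/rpc.py rpc_get_methods          # whitelisted: lists exactly the two allowed names
    scripts/rpc.py env_dpdk_get_mem_stats   # not whitelisted: expected to fail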
00:07:22.843 [2024-10-15 04:31:12.195920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61042 ] 00:07:23.102 [2024-10-15 04:31:12.368380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.102 [2024-10-15 04:31:12.488661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.038 04:31:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.038 04:31:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:24.038 04:31:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.356 { 00:07:24.356 "version": "SPDK v25.01-pre git sha1 1b0026227", 00:07:24.356 "fields": { 00:07:24.356 "major": 25, 00:07:24.356 "minor": 1, 00:07:24.356 "patch": 0, 00:07:24.356 "suffix": "-pre", 00:07:24.356 "commit": "1b0026227" 00:07:24.356 } 00:07:24.356 } 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.356 04:31:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.356 04:31:13 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.613 request: 00:07:24.613 { 00:07:24.613 "method": "env_dpdk_get_mem_stats", 00:07:24.613 "req_id": 1 00:07:24.613 } 00:07:24.613 Got JSON-RPC error response 00:07:24.613 response: 00:07:24.613 { 00:07:24.613 "code": -32601, 00:07:24.613 "message": "Method not found" 00:07:24.613 } 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.613 04:31:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61042 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61042 ']' 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61042 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61042 00:07:24.613 killing process with pid 61042 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61042' 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 61042 00:07:24.613 04:31:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 61042 00:07:27.143 00:07:27.143 real 0m4.499s 00:07:27.143 user 0m4.707s 00:07:27.143 sys 0m0.644s 00:07:27.143 04:31:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.143 ************************************ 00:07:27.143 END TEST app_cmdline 00:07:27.143 ************************************ 00:07:27.143 04:31:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.143 04:31:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.143 04:31:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.143 04:31:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.143 04:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:27.143 ************************************ 00:07:27.143 START TEST version 00:07:27.143 ************************************ 00:07:27.143 04:31:16 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.143 * Looking for test storage... 
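version.sh derives the SPDK version two ways and requires them to agree: once by scraping include/spdk/version.h with grep/cut/tr, once by importing the spdk Python package. The scrape, condensed from the xtrace below:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 25
    # MINOR gives 1, PATCH gives 0, SUFFIX gives -pre; with patch == 0 this assembles to 25.1rc0
    python3 -c 'import spdk; print(spdk.__version__)'   # must also print 25.1rc0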
00:07:27.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:27.143 04:31:16 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:27.143 04:31:16 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:27.143 04:31:16 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:27.143 04:31:16 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:27.143 04:31:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.143 04:31:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.143 04:31:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.143 04:31:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.143 04:31:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.143 04:31:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.143 04:31:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.143 04:31:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.143 04:31:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.143 04:31:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.143 04:31:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.402 04:31:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:27.402 04:31:16 version -- scripts/common.sh@345 -- # : 1 00:07:27.402 04:31:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.402 04:31:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.402 04:31:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:27.402 04:31:16 version -- scripts/common.sh@353 -- # local d=1 00:07:27.402 04:31:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.402 04:31:16 version -- scripts/common.sh@355 -- # echo 1 00:07:27.402 04:31:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.402 04:31:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:27.402 04:31:16 version -- scripts/common.sh@353 -- # local d=2 00:07:27.402 04:31:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.402 04:31:16 version -- scripts/common.sh@355 -- # echo 2 00:07:27.402 04:31:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.402 04:31:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.402 04:31:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.402 04:31:16 version -- scripts/common.sh@368 -- # return 0 00:07:27.402 04:31:16 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.402 04:31:16 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:27.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.402 --rc genhtml_branch_coverage=1 00:07:27.402 --rc genhtml_function_coverage=1 00:07:27.402 --rc genhtml_legend=1 00:07:27.402 --rc geninfo_all_blocks=1 00:07:27.402 --rc geninfo_unexecuted_blocks=1 00:07:27.402 00:07:27.402 ' 00:07:27.402 04:31:16 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:27.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.402 --rc genhtml_branch_coverage=1 00:07:27.402 --rc genhtml_function_coverage=1 00:07:27.402 --rc genhtml_legend=1 00:07:27.402 --rc geninfo_all_blocks=1 00:07:27.402 --rc geninfo_unexecuted_blocks=1 00:07:27.402 00:07:27.402 ' 00:07:27.402 04:31:16 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:27.402 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:27.402 --rc genhtml_branch_coverage=1 00:07:27.402 --rc genhtml_function_coverage=1 00:07:27.402 --rc genhtml_legend=1 00:07:27.402 --rc geninfo_all_blocks=1 00:07:27.402 --rc geninfo_unexecuted_blocks=1 00:07:27.402 00:07:27.402 ' 00:07:27.402 04:31:16 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:27.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.402 --rc genhtml_branch_coverage=1 00:07:27.402 --rc genhtml_function_coverage=1 00:07:27.402 --rc genhtml_legend=1 00:07:27.402 --rc geninfo_all_blocks=1 00:07:27.402 --rc geninfo_unexecuted_blocks=1 00:07:27.402 00:07:27.402 ' 00:07:27.402 04:31:16 version -- app/version.sh@17 -- # get_header_version major 00:07:27.402 04:31:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # cut -f2 00:07:27.402 04:31:16 version -- app/version.sh@17 -- # major=25 00:07:27.402 04:31:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.402 04:31:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # cut -f2 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.402 04:31:16 version -- app/version.sh@18 -- # minor=1 00:07:27.402 04:31:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:27.402 04:31:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # cut -f2 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.402 04:31:16 version -- app/version.sh@19 -- # patch=0 00:07:27.402 04:31:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.402 04:31:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.402 04:31:16 version -- app/version.sh@14 -- # cut -f2 00:07:27.402 04:31:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.402 04:31:16 version -- app/version.sh@22 -- # version=25.1 00:07:27.402 04:31:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.402 04:31:16 version -- app/version.sh@28 -- # version=25.1rc0 00:07:27.402 04:31:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:27.402 04:31:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.402 04:31:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:27.403 04:31:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:27.403 ************************************ 00:07:27.403 END TEST version 00:07:27.403 ************************************ 00:07:27.403 00:07:27.403 real 0m0.330s 00:07:27.403 user 0m0.194s 00:07:27.403 sys 0m0.196s 00:07:27.403 04:31:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.403 04:31:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:27.403 04:31:16 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.403 04:31:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:27.403 04:31:16 -- spdk/autotest.sh@194 -- # uname -s 00:07:27.403 04:31:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:27.403 04:31:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.403 04:31:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.403 04:31:16 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:27.403 04:31:16 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:27.403 04:31:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:27.403 04:31:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.403 04:31:16 -- common/autotest_common.sh@10 -- # set +x 00:07:27.403 ************************************ 00:07:27.403 START TEST blockdev_nvme 00:07:27.403 ************************************ 00:07:27.403 04:31:16 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:27.662 * Looking for test storage... 00:07:27.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:27.662 04:31:16 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:27.662 04:31:16 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:27.662 04:31:16 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:27.662 04:31:17 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.662 04:31:17 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:27.662 04:31:17 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.662 04:31:17 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:27.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.662 --rc genhtml_branch_coverage=1 00:07:27.662 --rc genhtml_function_coverage=1 00:07:27.662 --rc genhtml_legend=1 00:07:27.662 --rc geninfo_all_blocks=1 00:07:27.662 --rc geninfo_unexecuted_blocks=1 00:07:27.662 00:07:27.662 ' 00:07:27.662 04:31:17 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:27.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.662 --rc genhtml_branch_coverage=1 00:07:27.662 --rc genhtml_function_coverage=1 00:07:27.662 --rc genhtml_legend=1 00:07:27.662 --rc geninfo_all_blocks=1 00:07:27.662 --rc geninfo_unexecuted_blocks=1 00:07:27.662 00:07:27.662 ' 00:07:27.662 04:31:17 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:27.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.662 --rc genhtml_branch_coverage=1 00:07:27.662 --rc genhtml_function_coverage=1 00:07:27.662 --rc genhtml_legend=1 00:07:27.662 --rc geninfo_all_blocks=1 00:07:27.662 --rc geninfo_unexecuted_blocks=1 00:07:27.662 00:07:27.662 ' 00:07:27.662 04:31:17 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:27.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.662 --rc genhtml_branch_coverage=1 00:07:27.662 --rc genhtml_function_coverage=1 00:07:27.662 --rc genhtml_legend=1 00:07:27.662 --rc geninfo_all_blocks=1 00:07:27.662 --rc geninfo_unexecuted_blocks=1 00:07:27.662 00:07:27.662 ' 00:07:27.662 04:31:17 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:27.662 04:31:17 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:27.662 04:31:17 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:27.662 04:31:17 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.662 04:31:17 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61230 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61230 00:07:27.663 04:31:17 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:27.663 04:31:17 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61230 ']' 00:07:27.663 04:31:17 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.663 04:31:17 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.663 04:31:17 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.663 04:31:17 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.663 04:31:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.993 [2024-10-15 04:31:17.219859] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
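For reference, the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step traced above is a bounded poll against the freshly started target's RPC socket. A minimal bash sketch of that idea (the function name and loop body here are illustrative assumptions, not the autotest_common.sh source):

    # Sketch: poll until an SPDK target exposes its RPC UNIX socket.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do            # max_retries=100, as logged above
            kill -0 "$pid" 2>/dev/null || return 1 # target died: give up early
            [[ -S $sock ]] && return 0             # socket present: safe to issue rpc_cmd
            sleep 0.1
        done
        return 1
    }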
00:07:27.993 [2024-10-15 04:31:17.220787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:07:27.993 [2024-10-15 04:31:17.394150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.253 [2024-10-15 04:31:17.503495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.189 04:31:18 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.189 04:31:18 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:29.189 04:31:18 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:29.189 04:31:18 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:29.189 04:31:18 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:29.189 04:31:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:29.189 04:31:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:29.189 04:31:18 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:29.189 04:31:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.189 04:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.447 04:31:18 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:29.447 04:31:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.447 04:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.716 04:31:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.716 04:31:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:29.716 04:31:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:29.717 04:31:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8496c88e-9ca1-4e73-bf84-f7864386e96b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8496c88e-9ca1-4e73-bf84-f7864386e96b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "de954fdc-1baa-4d0e-802c-55b39e46e2bd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "de954fdc-1baa-4d0e-802c-55b39e46e2bd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e82608ce-d220-46a1-9a0b-ceae1f308992"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e82608ce-d220-46a1-9a0b-ceae1f308992",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "903ca821-cf3c-47cf-8ef3-4e9946839a77"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "903ca821-cf3c-47cf-8ef3-4e9946839a77",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2387dfee-43ec-4296-8b5a-833bb83c5d59"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "2387dfee-43ec-4296-8b5a-833bb83c5d59",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "aea37986-19a5-43e9-90b1-25b0bfb4b0a6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "aea37986-19a5-43e9-90b1-25b0bfb4b0a6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:29.717 04:31:19 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:29.717 04:31:19 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:29.717 04:31:19 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:29.717 04:31:19 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61230 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61230 ']' 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61230 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:29.717 04:31:19 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61230 00:07:29.717 killing process with pid 61230 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61230' 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61230 00:07:29.717 04:31:19 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61230 00:07:32.294 04:31:21 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:32.294 04:31:21 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.294 04:31:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:32.294 04:31:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.294 04:31:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.294 ************************************ 00:07:32.294 START TEST bdev_hello_world 00:07:32.294 ************************************ 00:07:32.294 04:31:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.294 [2024-10-15 04:31:21.617558] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:07:32.294 [2024-10-15 04:31:21.617949] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:07:32.294 [2024-10-15 04:31:21.790623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.553 [2024-10-15 04:31:21.907628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.120 [2024-10-15 04:31:22.571501] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:33.120 [2024-10-15 04:31:22.571761] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:33.120 [2024-10-15 04:31:22.571796] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:33.120 [2024-10-15 04:31:22.574790] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:33.120 [2024-10-15 04:31:22.575547] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:33.120 [2024-10-15 04:31:22.575583] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:33.120 [2024-10-15 04:31:22.575812] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
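The whole bdev_hello_world test traced above reduces to one run of the packaged example binary against the generated NVMe config; rerunning it by hand uses the same command the trace records:

    # Same invocation as logged above; prints the NOTICE sequence that ends in
    # "Read string from bdev : Hello World!".
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1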
00:07:33.120 00:07:33.120 [2024-10-15 04:31:22.575847] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:34.497 00:07:34.497 real 0m2.183s 00:07:34.497 user 0m1.820s 00:07:34.497 sys 0m0.254s 00:07:34.497 04:31:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.497 ************************************ 00:07:34.497 END TEST bdev_hello_world 00:07:34.497 ************************************ 00:07:34.497 04:31:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:34.497 04:31:23 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:34.497 04:31:23 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.497 04:31:23 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.497 04:31:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.497 ************************************ 00:07:34.497 START TEST bdev_bounds 00:07:34.497 ************************************ 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:34.497 Process bdevio pid: 61373 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61373 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61373' 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61373 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61373 ']' 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.497 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.498 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.498 04:31:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:34.498 [2024-10-15 04:31:23.863176] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
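The bdev_bounds run starting here has two moving parts visible in the trace: the bdevio app is launched against the same JSON config, and once its RPC socket is up the CUnit suites are driven remotely. A condensed sketch (the backgrounding and cleanup lines are illustrative glue; the two commands are the ones logged):

    # Start the bdevio server and fire the test suites over its RPC socket.
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ...poll for /var/tmp/spdk.sock as in waitforlisten, then:
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid" 2>/dev/null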
00:07:34.498 [2024-10-15 04:31:23.863308] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61373 ] 00:07:34.756 [2024-10-15 04:31:24.028242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.756 [2024-10-15 04:31:24.160119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.756 [2024-10-15 04:31:24.160216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.756 [2024-10-15 04:31:24.160245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.739 04:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.739 04:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:35.739 04:31:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:35.739 I/O targets: 00:07:35.739 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:35.739 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:35.739 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:35.739 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:35.739 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:35.739 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:35.739 00:07:35.739 00:07:35.739 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.739 http://cunit.sourceforge.net/ 00:07:35.739 00:07:35.739 00:07:35.739 Suite: bdevio tests on: Nvme3n1 00:07:35.739 Test: blockdev write read block ...passed 00:07:35.739 Test: blockdev write zeroes read block ...passed 00:07:35.739 Test: blockdev write zeroes read no split ...passed 00:07:35.739 Test: blockdev write zeroes read split ...passed 00:07:35.739 Test: blockdev write zeroes read split partial ...passed 00:07:35.739 Test: blockdev reset ...[2024-10-15 04:31:25.093463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:35.739 [2024-10-15 04:31:25.097997] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:35.739 passed 00:07:35.739 Test: blockdev write read 8 blocks ...passed 00:07:35.739 Test: blockdev write read size > 128k ...passed 00:07:35.739 Test: blockdev write read invalid size ...passed 00:07:35.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:35.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:35.739 Test: blockdev write read max offset ...passed 00:07:35.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:35.739 Test: blockdev writev readv 8 blocks ...passed 00:07:35.739 Test: blockdev writev readv 30 x 1block ...passed 00:07:35.739 Test: blockdev writev readv block ...passed 00:07:35.739 Test: blockdev writev readv size > 128k ...passed 00:07:35.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:35.739 Test: blockdev comparev and writev ...[2024-10-15 04:31:25.108528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf40a000 len:0x1000 00:07:35.739 passed 00:07:35.739 Test: blockdev nvme passthru rw ...[2024-10-15 04:31:25.108847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:35.739 passed 00:07:35.739 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:31:25.109863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:35.739 passed 00:07:35.739 Test: blockdev nvme admin passthru ...[2024-10-15 04:31:25.110100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:35.739 passed 00:07:35.739 Test: blockdev copy ...passed 00:07:35.739 Suite: bdevio tests on: Nvme2n3 00:07:35.739 Test: blockdev write read block ...passed 00:07:35.739 Test: blockdev write zeroes read block ...passed 00:07:35.739 Test: blockdev write zeroes read no split ...passed 00:07:35.739 Test: blockdev write zeroes read split ...passed 00:07:35.739 Test: blockdev write zeroes read split partial ...passed 00:07:35.739 Test: blockdev reset ...[2024-10-15 04:31:25.190344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:35.739 passed 00:07:35.739 Test: blockdev write read 8 blocks ...[2024-10-15 04:31:25.195000] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:35.739 passed 00:07:35.739 Test: blockdev write read size > 128k ...passed 00:07:35.739 Test: blockdev write read invalid size ...passed 00:07:35.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:35.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:35.739 Test: blockdev write read max offset ...passed 00:07:35.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:35.740 Test: blockdev writev readv 8 blocks ...passed 00:07:35.740 Test: blockdev writev readv 30 x 1block ...passed 00:07:35.740 Test: blockdev writev readv block ...passed 00:07:35.740 Test: blockdev writev readv size > 128k ...passed 00:07:35.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:35.740 Test: blockdev comparev and writev ...[2024-10-15 04:31:25.210280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a2606000 len:0x1000 00:07:35.740 passed 00:07:35.740 Test: blockdev nvme passthru rw ...[2024-10-15 04:31:25.210668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:35.740 passed 00:07:35.740 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:31:25.211807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:35.740 passed 00:07:35.740 Test: blockdev nvme admin passthru ...[2024-10-15 04:31:25.211942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:35.740 passed 00:07:35.740 Test: blockdev copy ...passed 00:07:35.740 Suite: bdevio tests on: Nvme2n2 00:07:35.740 Test: blockdev write read block ...passed 00:07:35.740 Test: blockdev write zeroes read block ...passed 00:07:35.740 Test: blockdev write zeroes read no split ...passed 00:07:35.999 Test: blockdev write zeroes read split ...passed 00:07:35.999 Test: blockdev write zeroes read split partial ...passed 00:07:35.999 Test: blockdev reset ...[2024-10-15 04:31:25.307510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:35.999 [2024-10-15 04:31:25.312081] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:35.999 passed 00:07:35.999 Test: blockdev write read 8 blocks ...passed 00:07:35.999 Test: blockdev write read size > 128k ...passed 00:07:35.999 Test: blockdev write read invalid size ...passed 00:07:35.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:35.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:35.999 Test: blockdev write read max offset ...passed 00:07:35.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:35.999 Test: blockdev writev readv 8 blocks ...passed 00:07:35.999 Test: blockdev writev readv 30 x 1block ...passed 00:07:35.999 Test: blockdev writev readv block ...passed 00:07:35.999 Test: blockdev writev readv size > 128k ...passed 00:07:35.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:35.999 Test: blockdev comparev and writev ...[2024-10-15 04:31:25.320673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf43c000 len:0x1000 00:07:35.999 passed 00:07:35.999 Test: blockdev nvme passthru rw ...[2024-10-15 04:31:25.320949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:35.999 passed 00:07:35.999 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:31:25.321782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:35.999 passed 00:07:35.999 Test: blockdev nvme admin passthru ...[2024-10-15 04:31:25.322030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:35.999 passed 00:07:35.999 Test: blockdev copy ...passed 00:07:35.999 Suite: bdevio tests on: Nvme2n1 00:07:35.999 Test: blockdev write read block ...passed 00:07:35.999 Test: blockdev write zeroes read block ...passed 00:07:35.999 Test: blockdev write zeroes read no split ...passed 00:07:35.999 Test: blockdev write zeroes read split ...passed 00:07:35.999 Test: blockdev write zeroes read split partial ...passed 00:07:35.999 Test: blockdev reset ...[2024-10-15 04:31:25.394579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:35.999 [2024-10-15 04:31:25.399180] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:36.000 passed 00:07:36.000 Test: blockdev write read 8 blocks ...passed 00:07:36.000 Test: blockdev write read size > 128k ...passed 00:07:36.000 Test: blockdev write read invalid size ...passed 00:07:36.000 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.000 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.000 Test: blockdev write read max offset ...passed 00:07:36.000 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.000 Test: blockdev writev readv 8 blocks ...passed 00:07:36.000 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.000 Test: blockdev writev readv block ...passed 00:07:36.000 Test: blockdev writev readv size > 128k ...passed 00:07:36.000 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.000 Test: blockdev comparev and writev ...[2024-10-15 04:31:25.408541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf438000 len:0x1000 00:07:36.000 passed 00:07:36.000 Test: blockdev nvme passthru rw ...[2024-10-15 04:31:25.408806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.000 passed 00:07:36.000 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:31:25.410081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:36.000 [2024-10-15 04:31:25.410193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:36.000 passed 00:07:36.000 Test: blockdev nvme admin passthru ...passed 00:07:36.000 Test: blockdev copy ...passed 00:07:36.000 Suite: bdevio tests on: Nvme1n1 00:07:36.000 Test: blockdev write read block ...passed 00:07:36.000 Test: blockdev write zeroes read block ...passed 00:07:36.000 Test: blockdev write zeroes read no split ...passed 00:07:36.000 Test: blockdev write zeroes read split ...passed 00:07:36.000 Test: blockdev write zeroes read split partial ...passed 00:07:36.000 Test: blockdev reset ...[2024-10-15 04:31:25.502144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:36.258 passed 00:07:36.258 Test: blockdev write read 8 blocks ...[2024-10-15 04:31:25.506463] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:36.258 passed 00:07:36.258 Test: blockdev write read size > 128k ...passed 00:07:36.258 Test: blockdev write read invalid size ...passed 00:07:36.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.258 Test: blockdev write read max offset ...passed 00:07:36.258 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.258 Test: blockdev writev readv 8 blocks ...passed 00:07:36.258 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.258 Test: blockdev writev readv block ...passed 00:07:36.258 Test: blockdev writev readv size > 128k ...passed 00:07:36.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.259 Test: blockdev comparev and writev ...[2024-10-15 04:31:25.515296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf434000 len:0x1000 00:07:36.259 passed 00:07:36.259 Test: blockdev nvme passthru rw ...[2024-10-15 04:31:25.515544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.259 passed 00:07:36.259 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:31:25.516403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:36.259 [2024-10-15 04:31:25.516639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:36.259 passed 00:07:36.259 Test: blockdev nvme admin passthru ...passed 00:07:36.259 Test: blockdev copy ...passed 00:07:36.259 Suite: bdevio tests on: Nvme0n1 00:07:36.259 Test: blockdev write read block ...passed 00:07:36.259 Test: blockdev write zeroes read block ...passed 00:07:36.259 Test: blockdev write zeroes read no split ...passed 00:07:36.259 Test: blockdev write zeroes read split ...passed 00:07:36.259 Test: blockdev write zeroes read split partial ...passed 00:07:36.259 Test: blockdev reset ...[2024-10-15 04:31:25.663553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:36.259 [2024-10-15 04:31:25.667583] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:36.259 passed 00:07:36.259 Test: blockdev write read 8 blocks ...passed 00:07:36.259 Test: blockdev write read size > 128k ...passed 00:07:36.259 Test: blockdev write read invalid size ...passed 00:07:36.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.259 Test: blockdev write read max offset ...passed 00:07:36.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.259 Test: blockdev writev readv 8 blocks ...passed 00:07:36.259 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.259 Test: blockdev writev readv block ...passed 00:07:36.259 Test: blockdev writev readv size > 128k ...passed 00:07:36.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.259 Test: blockdev comparev and writev ...[2024-10-15 04:31:25.676072] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:36.259 separate metadata which is not supported yet. passed
00:07:36.259 00:07:36.259 Test: blockdev nvme passthru rw ...passed 00:07:36.259 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:31:25.676772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:36.259 [2024-10-15 04:31:25.677031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:36.259 passed 00:07:36.259 Test: blockdev nvme admin passthru ...passed 00:07:36.259 Test: blockdev copy ...passed 00:07:36.259 00:07:36.259 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.259 suites 6 6 n/a 0 0 00:07:36.259 tests 138 138 138 0 0 00:07:36.259 asserts 893 893 893 0 n/a 00:07:36.259 00:07:36.259 Elapsed time = 1.789 seconds 00:07:36.259 0 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61373 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61373 ']' 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61373 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61373 00:07:36.259 killing process with pid 61373 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61373' 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61373 00:07:36.259 04:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61373 00:07:37.635 ************************************ 00:07:37.635 END TEST bdev_bounds 00:07:37.635 ************************************ 00:07:37.635 04:31:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:37.635 00:07:37.635 real 0m3.090s 00:07:37.635 user 0m7.941s 00:07:37.635 sys 0m0.437s 00:07:37.635 04:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.635 04:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 04:31:26 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:37.635 04:31:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:37.635 04:31:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.635 04:31:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 ************************************ 00:07:37.635 START TEST bdev_nbd 00:07:37.635 ************************************ 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61441 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61441 /var/tmp/spdk-nbd.sock 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61441 ']' 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.635 04:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:37.635 [2024-10-15 04:31:27.043212] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
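The per-device loop that follows repeats one recipe for each of the six bdevs: attach the bdev to an nbd node over the dedicated /var/tmp/spdk-nbd.sock RPC socket, wait for the kernel to publish the device, then read a single 4096-byte block with direct I/O and check its size. A sketch of one iteration (the until-loop is a simplification of waitfornbd's bounded retry, and the cleanup ordering is assumed):

    # Attach, wait for /proc/partitions to list the node, sanity-read one block.
    dev=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
    until grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
    dd if="$dev" of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s test/bdev/nbdtest) -eq 4096 ]]   # exactly one block read back
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    rm -f test/bdev/nbdtest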
00:07:37.635 [2024-10-15 04:31:27.043642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.893 [2024-10-15 04:31:27.237951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.893 [2024-10-15 04:31:27.397135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:38.878 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.137 1+0 records in 
00:07:39.137 1+0 records out 00:07:39.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986117 s, 4.2 MB/s 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.137 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.397 1+0 records in 00:07:39.397 1+0 records out 00:07:39.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363128 s, 11.3 MB/s 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.397 04:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.658 1+0 records in 00:07:39.658 1+0 records out 00:07:39.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705638 s, 5.8 MB/s 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.658 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.917 1+0 records in 00:07:39.917 1+0 records out 00:07:39.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010709 s, 3.8 MB/s 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.917 04:31:29 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.917 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.177 1+0 records in 00:07:40.177 1+0 records out 00:07:40.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669566 s, 6.1 MB/s 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:40.177 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:40.435 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.435 1+0 records in 00:07:40.435 1+0 records out 00:07:40.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585415 s, 7.0 MB/s 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:40.693 04:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.693 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:40.693 { 00:07:40.693 "nbd_device": "/dev/nbd0", 00:07:40.693 "bdev_name": "Nvme0n1" 00:07:40.693 }, 00:07:40.693 { 00:07:40.693 "nbd_device": "/dev/nbd1", 00:07:40.693 "bdev_name": "Nvme1n1" 00:07:40.693 }, 00:07:40.693 { 00:07:40.693 "nbd_device": "/dev/nbd2", 00:07:40.693 "bdev_name": "Nvme2n1" 00:07:40.693 }, 00:07:40.693 { 00:07:40.693 "nbd_device": "/dev/nbd3", 00:07:40.693 "bdev_name": "Nvme2n2" 00:07:40.693 }, 00:07:40.693 { 00:07:40.693 "nbd_device": "/dev/nbd4", 00:07:40.693 "bdev_name": "Nvme2n3" 00:07:40.693 }, 00:07:40.693 { 00:07:40.693 "nbd_device": "/dev/nbd5", 00:07:40.693 "bdev_name": "Nvme3n1" 00:07:40.693 } 00:07:40.693 ]' 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:40.984 { 00:07:40.984 "nbd_device": "/dev/nbd0", 00:07:40.984 "bdev_name": "Nvme0n1" 00:07:40.984 }, 00:07:40.984 { 00:07:40.984 "nbd_device": "/dev/nbd1", 00:07:40.984 "bdev_name": "Nvme1n1" 00:07:40.984 }, 00:07:40.984 { 00:07:40.984 "nbd_device": "/dev/nbd2", 00:07:40.984 "bdev_name": "Nvme2n1" 00:07:40.984 }, 00:07:40.984 { 00:07:40.984 "nbd_device": "/dev/nbd3", 00:07:40.984 "bdev_name": "Nvme2n2" 00:07:40.984 }, 00:07:40.984 { 00:07:40.984 "nbd_device": "/dev/nbd4", 00:07:40.984 "bdev_name": "Nvme2n3" 00:07:40.984 }, 00:07:40.984 { 00:07:40.984 "nbd_device": "/dev/nbd5", 00:07:40.984 "bdev_name": "Nvme3n1" 00:07:40.984 } 00:07:40.984 ]' 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- 
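
The waitfornbd traces repeated above all follow one pattern from autotest_common.sh: poll /proc/partitions (up to 20 attempts) until the kernel registers the new nbd node, then prove the device actually serves I/O by reading a single 4 KiB block with O_DIRECT and checking that a non-empty file came back. A condensed sketch of that pattern; the helper name, the /tmp scratch path, and the 0.1 s sleep are illustrative assumptions, while the loop bounds, dd flags, and the stat/rm steps are the ones visible in the trace:

    # Wait until /dev/$1 exists and answers a direct read (sketch).
    waitfornbd_sketch() {
        local nbd_name=$1 i size
        # Phase 1: the device node must show up in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1
        # Phase 2: one 4 KiB O_DIRECT read must return real data.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
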
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.984 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.243 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.502 04:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.068 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:42.326 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:42.326 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:42.326 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:42.326 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.326 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.326 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:42.584 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.584 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.584 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.584 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.584 04:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.842 04:31:32 
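
Teardown is the mirror image. nbd_stop_disks issues an nbd_stop_disk RPC per device, and waitfornbd_exit polls /proc/partitions until the entry disappears (the break at nbd_common.sh@41 fires once grep stops matching). A sketch with the rpc.py path and socket taken from the trace; the helper name and the sleep interval are assumptions:

    nbd_stop_disks_sketch() {
        local rpc_server=$1 nbd i
        shift
        for nbd in "$@"; do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$nbd"
            # Wait for the kernel to drop the partition entry again.
            for ((i = 1; i <= 20; i++)); do
                grep -q -w "$(basename "$nbd")" /proc/partitions || break
                sleep 0.1
            done
        done
    }

    # e.g. nbd_stop_disks_sketch /var/tmp/spdk-nbd.sock /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
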
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:42.842 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:43.100 /dev/nbd0 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.100 
04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.100 1+0 records in 00:07:43.100 1+0 records out 00:07:43.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678786 s, 6.0 MB/s 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.100 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:43.359 /dev/nbd1 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.359 1+0 records in 00:07:43.359 1+0 records out 00:07:43.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444324 s, 9.2 MB/s 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.359 04:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:43.618 /dev/nbd10 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:43.618 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.619 1+0 records in 00:07:43.619 1+0 records out 00:07:43.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513205 s, 8.0 MB/s 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.619 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:43.915 /dev/nbd11 00:07:43.915 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:43.915 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:43.915 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:43.915 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:43.915 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.915 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.916 04:31:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.916 1+0 records in 00:07:43.916 1+0 records out 00:07:43.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700956 s, 5.8 MB/s 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.916 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:44.198 /dev/nbd12 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.198 1+0 records in 00:07:44.198 1+0 records out 00:07:44.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773033 s, 5.3 MB/s 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.198 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.199 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:44.199 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:44.456 /dev/nbd13 
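
The nbd_rpc_data_verify stage entered at bdev/blockdev.sh@322 pairs the bdev list with the nbd list index by index and exports each bdev before any data flows; note that this second round deliberately picks the non-contiguous nodes /dev/nbd10 through /dev/nbd13, suggesting the node names are caller-chosen rather than allocated in order. A minimal sketch of that start-up loop, with paths and lists copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Attach bdev i to the requested nbd node, then wait for it to
        # appear (the waitfornbd pattern sketched earlier).
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done
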
00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.715 1+0 records in 00:07:44.715 1+0 records out 00:07:44.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108693 s, 3.8 MB/s 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.715 04:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.715 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd0", 00:07:44.715 "bdev_name": "Nvme0n1" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd1", 00:07:44.715 "bdev_name": "Nvme1n1" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd10", 00:07:44.715 "bdev_name": "Nvme2n1" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd11", 00:07:44.715 "bdev_name": "Nvme2n2" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd12", 00:07:44.715 "bdev_name": "Nvme2n3" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd13", 00:07:44.715 "bdev_name": "Nvme3n1" 00:07:44.715 } 00:07:44.715 ]' 00:07:44.715 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.715 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd0", 00:07:44.715 "bdev_name": "Nvme0n1" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd1", 00:07:44.715 "bdev_name": "Nvme1n1" 00:07:44.715 
}, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd10", 00:07:44.715 "bdev_name": "Nvme2n1" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd11", 00:07:44.715 "bdev_name": "Nvme2n2" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd12", 00:07:44.715 "bdev_name": "Nvme2n3" 00:07:44.715 }, 00:07:44.715 { 00:07:44.715 "nbd_device": "/dev/nbd13", 00:07:44.715 "bdev_name": "Nvme3n1" 00:07:44.715 } 00:07:44.715 ]' 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:44.975 /dev/nbd1 00:07:44.975 /dev/nbd10 00:07:44.975 /dev/nbd11 00:07:44.975 /dev/nbd12 00:07:44.975 /dev/nbd13' 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:44.975 /dev/nbd1 00:07:44.975 /dev/nbd10 00:07:44.975 /dev/nbd11 00:07:44.975 /dev/nbd12 00:07:44.975 /dev/nbd13' 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:44.975 256+0 records in 00:07:44.975 256+0 records out 00:07:44.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125563 s, 83.5 MB/s 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:44.975 256+0 records in 00:07:44.975 256+0 records out 00:07:44.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126915 s, 8.3 MB/s 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.975 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.300 256+0 records in 00:07:45.300 256+0 records out 00:07:45.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126844 s, 8.3 MB/s 00:07:45.300 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.300 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:45.300 256+0 records in 00:07:45.300 256+0 records out 00:07:45.300 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.120737 s, 8.7 MB/s 00:07:45.300 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.300 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:45.300 256+0 records in 00:07:45.300 256+0 records out 00:07:45.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127022 s, 8.3 MB/s 00:07:45.300 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.300 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:45.560 256+0 records in 00:07:45.560 256+0 records out 00:07:45.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131012 s, 8.0 MB/s 00:07:45.560 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.560 04:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:45.560 256+0 records in 00:07:45.560 256+0 records out 00:07:45.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127262 s, 8.2 MB/s 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.560 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.819 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.077 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.336 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.594 04:31:35 
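
With all six devices up, the traces above complete the data round trip of nbd_dd_data_verify: one 1 MiB seed file of random data (256 blocks of 4 KiB) is written through every nbd device with O_DIRECT, then each device is compared byte for byte against the seed with cmp, so corruption anywhere in the SPDK path surfaces as a mismatch before the devices are detached again. Condensed, with the scratch path shortened for illustration:

    tmp_file=/tmp/nbdrandtest      # the trace uses test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Write phase: seed the file once, then 1 MiB of direct writes per device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # Verify phase: the first 1 MiB of each device must equal the seed.
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd" || echo "mismatch on $nbd" >&2
    done
    rm "$tmp_file"
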
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.594 04:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.856 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.143 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.402 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.402 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.402 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:47.661 04:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:47.920 malloc_lvol_verify 00:07:47.920 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:48.178 cb7f6ccf-c5df-4c09-8361-0d510ad99204 00:07:48.178 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:48.437 232c0103-89d4-4704-bebd-685240a45014 00:07:48.437 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:48.437 /dev/nbd0 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:48.695 mke2fs 1.47.0 (5-Feb-2023) 00:07:48.695 Discarding device blocks: 0/4096 done 00:07:48.695 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:48.695 00:07:48.695 Allocating group tables: 0/1 done 00:07:48.695 Writing inode tables: 0/1 done 00:07:48.695 Creating journal (1024 blocks): done 00:07:48.695 Writing superblocks and filesystem accounting information: 0/1 done 00:07:48.695 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:48.695 04:31:37 
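
The nbd_with_lvol_verify step at bdev/blockdev.sh@323 stacks the logical-volume layer on the same transport: a 16 MiB malloc bdev with 512-byte blocks becomes an lvstore, a 4 MiB lvol is carved from it and exported as /dev/nbd0, and mkfs.ext4 proves the stack end to end (the mke2fs output above reports 4096 blocks of 1 KiB, exactly the 4 MiB volume). The RPC sequence, condensed from the trace; storing the rpc.py invocation in a plain string is a simplification:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export it to the kernel
    mkfs.ext4 /dev/nbd0                                    # end-to-end sanity check
    $rpc nbd_stop_disk /dev/nbd0
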
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.695 04:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61441 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61441 ']' 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61441 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61441 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.953 killing process with pid 61441 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61441' 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61441 00:07:48.953 04:31:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61441 00:07:50.328 04:31:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:50.328 00:07:50.328 real 0m12.666s 00:07:50.328 user 0m16.821s 00:07:50.328 sys 0m4.987s 00:07:50.328 04:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.328 04:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:50.328 ************************************ 00:07:50.328 END TEST bdev_nbd 00:07:50.328 ************************************ 00:07:50.328 04:31:39 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:50.328 04:31:39 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:50.328 skipping fio tests on NVMe due to multi-ns failures. 00:07:50.328 04:31:39 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
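
killprocess (the autotest_common.sh@950 to @974 trace above, applied to pid 61441) is the generic teardown helper: kill -0 probes whether the pid still exists without delivering a signal, the ps comm check distinguishes a directly spawned reactor from a sudo wrapper, and wait reaps the target so a late exit cannot leak into the next test. A simplified sketch of that shape; it is only valid when the target is a child of the calling shell:

    killprocess_sketch() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1          # probe only: is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            # The real helper treats a sudo wrapper specially; this
            # sketch simply refuses to signal one.
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"                         # default SIGTERM
        wait "$pid"                         # reap (children of this shell only)
    }
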
00:07:50.328 04:31:39 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:50.328 04:31:39 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:50.328 04:31:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:50.328 04:31:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.328 04:31:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.328 ************************************
00:07:50.328 START TEST bdev_verify
00:07:50.328 ************************************
00:07:50.328 04:31:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:50.328 [2024-10-15 04:31:39.759659] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
00:07:50.587 [2024-10-15 04:31:39.759861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61841 ]
00:07:50.587 [2024-10-15 04:31:39.939510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:50.587 [2024-10-15 04:31:40.080024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.587 [2024-10-15 04:31:40.080034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:51.522 Running I/O for 5 seconds...
00:07:53.532 21632.00 IOPS, 84.50 MiB/s [2024-10-15T04:31:44.411Z] 21280.00 IOPS, 83.12 MiB/s [2024-10-15T04:31:45.348Z] 21354.67 IOPS, 83.42 MiB/s [2024-10-15T04:31:46.283Z] 20960.00 IOPS, 81.88 MiB/s [2024-10-15T04:31:46.283Z] 21158.40 IOPS, 82.65 MiB/s
00:07:56.779 Latency(us)
00:07:56.779 [2024-10-15T04:31:46.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:56.779 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x0 length 0xbd0bd
00:07:56.779 Nvme0n1 : 5.07 1691.34 6.61 0.00 0.00 75512.86 10369.95 115385.47
00:07:56.779 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:56.779 Nvme0n1 : 5.04 1801.98 7.04 0.00 0.00 70855.48 15160.13 67799.49
00:07:56.779 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x0 length 0xa0000
00:07:56.779 Nvme1n1 : 5.06 1682.11 6.57 0.00 0.00 75573.03 8632.85 119596.62
00:07:56.779 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0xa0000 length 0xa0000
00:07:56.779 Nvme1n1 : 5.05 1801.26 7.04 0.00 0.00 70778.24 16107.64 67378.38
00:07:56.779 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x0 length 0x80000
00:07:56.779 Nvme2n1 : 5.07 1690.21 6.60 0.00 0.00 75077.10 11106.90 123807.77
00:07:56.779 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x80000 length 0x80000
00:07:56.779 Nvme2n1 : 5.05 1800.54 7.03 0.00 0.00 70704.05 15160.13 64851.69
00:07:56.779 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x0 length 0x80000
00:07:56.779 Nvme2n2 : 5.08 1689.80 6.60 0.00 0.00 74915.41 11370.10 125492.23
00:07:56.779 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x80000 length 0x80000
00:07:56.779 Nvme2n2 : 5.05 1799.81 7.03 0.00 0.00 70631.29 14844.30 62746.11
00:07:56.779 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x0 length 0x80000
00:07:56.779 Nvme2n3 : 5.08 1689.39 6.60 0.00 0.00 74800.75 11528.02 126334.46
00:07:56.779 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x80000 length 0x80000
00:07:56.779 Nvme2n3 : 5.05 1799.08 7.03 0.00 0.00 70561.69 12686.09 64851.69
00:07:56.779 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x0 length 0x20000
00:07:56.779 Nvme3n1 : 5.08 1688.98 6.60 0.00 0.00 74726.44 11791.22 118754.39
00:07:56.779 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:56.779 Verification LBA range: start 0x20000 length 0x20000
00:07:56.779 Nvme3n1 : 5.06 1808.98 7.07 0.00 0.00 70162.35 3447.88 66957.26
00:07:56.780 [2024-10-15T04:31:46.284Z] ===================================================================================================================
00:07:56.780 [2024-10-15T04:31:46.284Z] Total : 20943.46 81.81 0.00 0.00 72789.95 3447.88 126334.46
00:07:58.156
00:07:58.156 real 0m7.848s
00:07:58.156 user 0m14.443s
00:07:58.156 sys 0m0.334s
00:07:58.156 04:31:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:58.156 04:31:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:58.156 ************************************
00:07:58.156 END TEST bdev_verify
00:07:58.156 ************************************
00:07:58.156 04:31:47 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:58.156 04:31:47 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:58.156 04:31:47 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:58.156 04:31:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:58.156 ************************************
00:07:58.156 START TEST bdev_verify_big_io
00:07:58.156 ************************************
00:07:58.156 04:31:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:58.419 [2024-10-15 04:31:47.676235] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
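
bdev_verify drives all six namespaces through bdevperf for five seconds at queue depth 128 with 4 KiB I/O on two cores (-q 128 -o 4096 -w verify -t 5 -m 0x3), and the summary numbers are internally consistent: MiB/s is simply IOPS times the I/O size. The big-I/O variant that has just started changes only -o to 65536, trading IOPS for larger transfers. A quick sanity check of the two Total rows (the bc invocation is illustrative):

    # reported MiB/s = IOPS x I/O size / 1 MiB
    echo 'scale=2; 20943.46 * 4096 / 1048576' | bc    # 81.81, the 4 KiB Total row
    echo 'scale=2; 1984.52 * 65536 / 1048576' | bc    # 124.03, the 64 KiB Total row below
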
00:07:58.419 [2024-10-15 04:31:47.676366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61949 ] 00:07:58.419 [2024-10-15 04:31:47.849640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.677 [2024-10-15 04:31:47.967013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.677 [2024-10-15 04:31:47.967047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.614 Running I/O for 5 seconds... 00:08:03.525 1159.00 IOPS, 72.44 MiB/s [2024-10-15T04:31:53.965Z] 2008.50 IOPS, 125.53 MiB/s [2024-10-15T04:31:54.534Z] 2615.33 IOPS, 163.46 MiB/s [2024-10-15T04:31:54.794Z] 2742.50 IOPS, 171.41 MiB/s 00:08:05.290 Latency(us) 00:08:05.290 [2024-10-15T04:31:54.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.290 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x0 length 0xbd0b 00:08:05.290 Nvme0n1 : 5.61 158.73 9.92 0.00 0.00 787784.54 34531.42 801802.69 00:08:05.290 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:05.290 Nvme0n1 : 5.60 154.45 9.65 0.00 0.00 796155.01 25266.89 811909.45 00:08:05.290 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x0 length 0xa000 00:08:05.290 Nvme1n1 : 5.61 159.70 9.98 0.00 0.00 765522.90 75800.67 690628.37 00:08:05.290 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0xa000 length 0xa000 00:08:05.290 Nvme1n1 : 5.60 159.92 10.00 0.00 0.00 762516.34 73695.10 680521.61 00:08:05.290 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x0 length 0x8000 00:08:05.290 Nvme2n1 : 5.61 159.64 9.98 0.00 0.00 747153.40 75800.67 690628.37 00:08:05.290 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x8000 length 0x8000 00:08:05.290 Nvme2n1 : 5.60 159.87 9.99 0.00 0.00 743740.07 72431.76 697366.21 00:08:05.290 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x0 length 0x8000 00:08:05.290 Nvme2n2 : 5.66 162.78 10.17 0.00 0.00 715300.28 43164.27 707472.96 00:08:05.290 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x8000 length 0x8000 00:08:05.290 Nvme2n2 : 5.66 163.37 10.21 0.00 0.00 710352.48 54323.82 714210.80 00:08:05.290 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x0 length 0x8000 00:08:05.290 Nvme2n3 : 5.69 168.79 10.55 0.00 0.00 675648.25 24740.50 724317.56 00:08:05.290 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x8000 length 0x8000 00:08:05.290 Nvme2n3 : 5.72 174.36 10.90 0.00 0.00 651922.93 22003.25 731055.40 00:08:05.290 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x0 length 0x2000 00:08:05.290 Nvme3n1 : 5.71 179.41 11.21 0.00 0.00 622259.69 8843.41 741162.15 00:08:05.290 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.290 Verification LBA range: start 0x2000 length 0x2000 00:08:05.290 Nvme3n1 : 5.73 183.49 11.47 0.00 0.00 605464.71 1960.82 744531.07 00:08:05.290 [2024-10-15T04:31:54.794Z] =================================================================================================================== 00:08:05.290 [2024-10-15T04:31:54.794Z] Total : 1984.52 124.03 0.00 0.00 711660.93 1960.82 811909.45 00:08:07.189 00:08:07.189 real 0m8.945s 00:08:07.189 user 0m16.633s 00:08:07.189 sys 0m0.339s 00:08:07.189 04:31:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.189 ************************************ 00:08:07.189 END TEST bdev_verify_big_io 00:08:07.189 ************************************ 00:08:07.189 04:31:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:07.189 04:31:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.189 04:31:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:07.189 04:31:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.189 04:31:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.189 ************************************ 00:08:07.189 START TEST bdev_write_zeroes 00:08:07.189 ************************************ 00:08:07.189 04:31:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.189 [2024-10-15 04:31:56.657557] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:07.189 [2024-10-15 04:31:56.657703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:08:07.447 [2024-10-15 04:31:56.825732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.706 [2024-10-15 04:31:56.976917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.271 Running I/O for 1 seconds... 
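For reference, the write_zeroes pass that follows is driven by the same bdevperf harness as the verify passes above. A minimal standalone sketch of the invocation, with the binary and config paths taken from this log and everything else an assumption:
  # bdevperf flags as used by run_test: -q 128 (queue depth), -o 4096 (IO size
  # in bytes), -w write_zeroes (workload), -t 1 (runtime in seconds)
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1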
00:08:09.643 69041.00 IOPS, 269.69 MiB/s 00:08:09.643 Latency(us) 00:08:09.643 [2024-10-15T04:31:59.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.643 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.643 Nvme0n1 : 1.02 11201.97 43.76 0.00 0.00 11391.52 4448.03 33689.19 00:08:09.643 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.643 Nvme1n1 : 1.02 11548.54 45.11 0.00 0.00 11036.51 7737.99 28635.81 00:08:09.643 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.643 Nvme2n1 : 1.02 11579.08 45.23 0.00 0.00 10952.91 6185.12 23056.04 00:08:09.643 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.643 Nvme2n2 : 1.02 11544.83 45.10 0.00 0.00 10930.03 6211.44 24951.06 00:08:09.643 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.643 Nvme2n3 : 1.02 11562.53 45.17 0.00 0.00 10883.65 6500.96 23266.60 00:08:09.643 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.643 Nvme3n1 : 1.02 11489.51 44.88 0.00 0.00 10932.30 6527.28 29056.93 00:08:09.643 [2024-10-15T04:31:59.147Z] =================================================================================================================== 00:08:09.643 [2024-10-15T04:31:59.147Z] Total : 68926.46 269.24 0.00 0.00 11019.01 4448.03 33689.19 00:08:10.578 00:08:10.578 real 0m3.369s 00:08:10.578 user 0m2.974s 00:08:10.578 sys 0m0.273s 00:08:10.578 04:31:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.578 04:31:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:10.578 ************************************ 00:08:10.578 END TEST bdev_write_zeroes 00:08:10.578 ************************************ 00:08:10.578 04:31:59 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.578 04:31:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:10.578 04:31:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.578 04:31:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.578 ************************************ 00:08:10.578 START TEST bdev_json_nonenclosed 00:08:10.578 ************************************ 00:08:10.578 04:32:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.836 [2024-10-15 04:32:00.102469] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:08:10.836 [2024-10-15 04:32:00.102595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:08:10.836 [2024-10-15 04:32:00.273095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.092 [2024-10-15 04:32:00.396448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.092 [2024-10-15 04:32:00.396575] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:11.092 [2024-10-15 04:32:00.396614] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.092 [2024-10-15 04:32:00.396633] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.349 00:08:11.349 real 0m0.649s 00:08:11.349 user 0m0.402s 00:08:11.349 sys 0m0.143s 00:08:11.349 04:32:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.349 ************************************ 00:08:11.349 END TEST bdev_json_nonenclosed 00:08:11.349 ************************************ 00:08:11.349 04:32:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:11.349 04:32:00 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.349 04:32:00 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:11.349 04:32:00 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.349 04:32:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.349 ************************************ 00:08:11.349 START TEST bdev_json_nonarray 00:08:11.349 ************************************ 00:08:11.349 04:32:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.349 [2024-10-15 04:32:00.822526] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:11.349 [2024-10-15 04:32:00.822661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:08:11.607 [2024-10-15 04:32:00.993184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.607 [2024-10-15 04:32:01.106995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.607 [2024-10-15 04:32:01.107128] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
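Both JSON negative tests here exercise the same guard: bdevperf is handed a deliberately malformed --json config, json_config_prepare_ctx rejects it, and the app stops with a non-zero code. A sketch of the nonarray case (the exact contents of nonenclosed.json and nonarray.json are assumptions; this log only shows the resulting errors):
  # "subsystems" given as an object rather than an array should trigger the
  # "'subsystems' should be an array" error seen above; nonenclosed.json
  # analogously omits the enclosing {}. /tmp/nonarray.json is a hypothetical
  # stand-in for the real test fixture.
  printf '{ "subsystems": { "subsystem": "bdev", "config": [] } }\n' > /tmp/nonarray.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonarray.json \
    -q 128 -o 4096 -w write_zeroes -t 1 || echo 'rejected as expected'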
00:08:11.607 [2024-10-15 04:32:01.107164] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.607 [2024-10-15 04:32:01.107184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.864 00:08:11.864 real 0m0.636s 00:08:11.864 user 0m0.402s 00:08:11.864 sys 0m0.130s 00:08:11.864 04:32:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.864 04:32:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:11.864 ************************************ 00:08:11.864 END TEST bdev_json_nonarray 00:08:11.864 ************************************ 00:08:12.122 04:32:01 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:12.122 04:32:01 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:12.122 04:32:01 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:12.123 04:32:01 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:12.123 00:08:12.123 real 0m44.592s 00:08:12.123 user 1m6.212s 00:08:12.123 sys 0m8.064s 00:08:12.123 04:32:01 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.123 04:32:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 ************************************ 00:08:12.123 END TEST blockdev_nvme 00:08:12.123 ************************************ 00:08:12.123 04:32:01 -- spdk/autotest.sh@209 -- # uname -s 00:08:12.123 04:32:01 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:12.123 04:32:01 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:12.123 04:32:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.123 04:32:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.123 04:32:01 -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 ************************************ 00:08:12.123 START TEST blockdev_nvme_gpt 00:08:12.123 ************************************ 00:08:12.123 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:12.382 * Looking for test storage... 
00:08:12.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.382 04:32:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.382 --rc genhtml_branch_coverage=1 00:08:12.382 --rc genhtml_function_coverage=1 00:08:12.382 --rc genhtml_legend=1 00:08:12.382 --rc geninfo_all_blocks=1 00:08:12.382 --rc geninfo_unexecuted_blocks=1 00:08:12.382 00:08:12.382 ' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.382 --rc 
genhtml_branch_coverage=1 00:08:12.382 --rc genhtml_function_coverage=1 00:08:12.382 --rc genhtml_legend=1 00:08:12.382 --rc geninfo_all_blocks=1 00:08:12.382 --rc geninfo_unexecuted_blocks=1 00:08:12.382 00:08:12.382 ' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.382 --rc genhtml_branch_coverage=1 00:08:12.382 --rc genhtml_function_coverage=1 00:08:12.382 --rc genhtml_legend=1 00:08:12.382 --rc geninfo_all_blocks=1 00:08:12.382 --rc geninfo_unexecuted_blocks=1 00:08:12.382 00:08:12.382 ' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.382 --rc genhtml_branch_coverage=1 00:08:12.382 --rc genhtml_function_coverage=1 00:08:12.382 --rc genhtml_legend=1 00:08:12.382 --rc geninfo_all_blocks=1 00:08:12.382 --rc geninfo_unexecuted_blocks=1 00:08:12.382 00:08:12.382 ' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62228 00:08:12.382 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:12.383 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:08:12.383 04:32:01 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62228 00:08:12.383 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62228 ']' 00:08:12.383 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.383 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.383 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.383 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.383 04:32:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.641 [2024-10-15 04:32:01.895740] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:12.641 [2024-10-15 04:32:01.895885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62228 ] 00:08:12.641 [2024-10-15 04:32:02.066374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.927 [2024-10-15 04:32:02.179719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.874 04:32:03 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.874 04:32:03 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:08:13.874 04:32:03 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:13.874 04:32:03 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:13.874 04:32:03 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:14.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:14.450 Waiting for block devices as requested 00:08:14.450 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.709 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.709 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.709 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:20.050 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:20.050 04:32:09 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:20.050 BYT; 00:08:20.050 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:20.050 BYT; 00:08:20.050 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:20.050 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:20.050 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:20.050 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:20.050 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:20.050 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:20.050 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:20.050 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:20.051 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:20.051 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:20.051 04:32:09 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:20.051 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:20.051 04:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:21.051 The operation has completed successfully. 00:08:21.051 04:32:10 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:22.010 The operation has completed successfully. 00:08:22.268 04:32:11 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:22.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.402 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.402 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.402 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.661 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.661 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:23.661 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.661 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.661 [] 00:08:23.661 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.661 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:23.661 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:23.661 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:23.661 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.661 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:23.661 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.661 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.920 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:24.180 04:32:13 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.180 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:24.181 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8f3d7ada-df1a-432b-9d59-914dd09e0fb8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8f3d7ada-df1a-432b-9d59-914dd09e0fb8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compar 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:24.181 e_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "abb483ca-e7c8-43e5-a7df-6f456de7d929"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "abb483ca-e7c8-43e5-a7df-6f456de7d929",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4e3a1724-ff72-48bd-a4d3-e1fea6a149fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e3a1724-ff72-48bd-a4d3-e1fea6a149fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6ecb5c23-bf0d-4fac-a3fb-af3e177eef2e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ecb5c23-bf0d-4fac-a3fb-af3e177eef2e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3f5eb28c-e84d-470a-b904-ed7f9f60d3b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3f5eb28c-e84d-470a-b904-ed7f9f60d3b1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:24.440 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:24.440 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:24.440 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:24.440 04:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62228 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62228 ']' 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62228 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62228 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.440 killing process with pid 62228 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62228' 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62228 00:08:24.440 04:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62228 00:08:26.983 04:32:16 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:26.983 04:32:16 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:26.983 04:32:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:26.983 04:32:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.983 04:32:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.983 ************************************ 00:08:26.983 START TEST bdev_hello_world 00:08:26.983 ************************************ 00:08:26.983 04:32:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:26.983 
[2024-10-15 04:32:16.252143] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:26.983 [2024-10-15 04:32:16.252263] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62869 ] 00:08:26.983 [2024-10-15 04:32:16.425656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.242 [2024-10-15 04:32:16.546391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.811 [2024-10-15 04:32:17.218831] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:27.811 [2024-10-15 04:32:17.218891] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:27.811 [2024-10-15 04:32:17.218919] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:27.811 [2024-10-15 04:32:17.221848] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:27.811 [2024-10-15 04:32:17.222372] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:27.811 [2024-10-15 04:32:17.222401] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:27.811 [2024-10-15 04:32:17.222628] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:27.811 00:08:27.811 [2024-10-15 04:32:17.222669] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:29.190 00:08:29.190 real 0m2.197s 00:08:29.190 user 0m1.835s 00:08:29.190 sys 0m0.251s 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:29.190 ************************************ 00:08:29.190 END TEST bdev_hello_world 00:08:29.190 ************************************ 00:08:29.190 04:32:18 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:29.190 04:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.190 04:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.190 04:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:29.190 ************************************ 00:08:29.190 START TEST bdev_bounds 00:08:29.190 ************************************ 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62917 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:29.190 Process bdevio pid: 62917 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62917' 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62917 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 62917 ']' 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.190 04:32:18 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.190 04:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:29.190 [2024-10-15 04:32:18.521448] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:29.190 [2024-10-15 04:32:18.521575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62917 ] 00:08:29.485 [2024-10-15 04:32:18.693401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.485 [2024-10-15 04:32:18.812797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.485 [2024-10-15 04:32:18.812948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.485 [2024-10-15 04:32:18.813021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.051 04:32:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.051 04:32:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:30.051 04:32:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:30.309 I/O targets: 00:08:30.309 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:30.309 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:30.309 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:30.309 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:30.309 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:30.309 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:30.309 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:30.309 00:08:30.309 00:08:30.309 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.309 http://cunit.sourceforge.net/ 00:08:30.309 00:08:30.309 00:08:30.309 Suite: bdevio tests on: Nvme3n1 00:08:30.309 Test: blockdev write read block ...passed 00:08:30.309 Test: blockdev write zeroes read block ...passed 00:08:30.309 Test: blockdev write zeroes read no split ...passed 00:08:30.309 Test: blockdev write zeroes read split ...passed 00:08:30.309 Test: blockdev write zeroes read split partial ...passed 00:08:30.309 Test: blockdev reset ...[2024-10-15 04:32:19.689664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:30.309 [2024-10-15 04:32:19.693810] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:30.309 passed 00:08:30.309 Test: blockdev write read 8 blocks ...passed 00:08:30.309 Test: blockdev write read size > 128k ...passed 00:08:30.309 Test: blockdev write read invalid size ...passed 00:08:30.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.309 Test: blockdev write read max offset ...passed 00:08:30.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.309 Test: blockdev writev readv 8 blocks ...passed 00:08:30.309 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.309 Test: blockdev writev readv block ...passed 00:08:30.309 Test: blockdev writev readv size > 128k ...passed 00:08:30.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.309 Test: blockdev comparev and writev ...[2024-10-15 04:32:19.703404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bcc04000 len:0x1000 00:08:30.309 passed 00:08:30.309 Test: blockdev nvme passthru rw ...[2024-10-15 04:32:19.703715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.309 passed 00:08:30.309 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:32:19.704772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.309 passed 00:08:30.309 Test: blockdev nvme admin passthru ...[2024-10-15 04:32:19.705006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.309 passed 00:08:30.309 Test: blockdev copy ...passed 00:08:30.309 Suite: bdevio tests on: Nvme2n3 00:08:30.309 Test: blockdev write read block ...passed 00:08:30.309 Test: blockdev write zeroes read block ...passed 00:08:30.309 Test: blockdev write zeroes read no split ...passed 00:08:30.309 Test: blockdev write zeroes read split ...passed 00:08:30.309 Test: blockdev write zeroes read split partial ...passed 00:08:30.309 Test: blockdev reset ...[2024-10-15 04:32:19.785565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:30.309 passed 00:08:30.309 Test: blockdev write read 8 blocks ...[2024-10-15 04:32:19.789894] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:30.309 passed 00:08:30.309 Test: blockdev write read size > 128k ...passed 00:08:30.309 Test: blockdev write read invalid size ...passed 00:08:30.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.309 Test: blockdev write read max offset ...passed 00:08:30.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.309 Test: blockdev writev readv 8 blocks ...passed 00:08:30.309 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.309 Test: blockdev writev readv block ...passed 00:08:30.309 Test: blockdev writev readv size > 128k ...passed 00:08:30.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.309 Test: blockdev comparev and writev ...[2024-10-15 04:32:19.801343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:08:30.309 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bcc02000 len:0x1000 00:08:30.309 [2024-10-15 04:32:19.801677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.309 passed 00:08:30.309 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:32:19.802930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.309 [2024-10-15 04:32:19.803054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.309 passed 00:08:30.309 Test: blockdev nvme admin passthru ...passed 00:08:30.309 Test: blockdev copy ...passed 00:08:30.309 Suite: bdevio tests on: Nvme2n2 00:08:30.309 Test: blockdev write read block ...passed 00:08:30.309 Test: blockdev write zeroes read block ...passed 00:08:30.568 Test: blockdev write zeroes read no split ...passed 00:08:30.568 Test: blockdev write zeroes read split ...passed 00:08:30.568 Test: blockdev write zeroes read split partial ...passed 00:08:30.568 Test: blockdev reset ...[2024-10-15 04:32:19.895558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:30.568 [2024-10-15 04:32:19.900080] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:30.568 passed 00:08:30.568 Test: blockdev write read 8 blocks ...passed 00:08:30.568 Test: blockdev write read size > 128k ...passed 00:08:30.568 Test: blockdev write read invalid size ...passed 00:08:30.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.568 Test: blockdev write read max offset ...passed 00:08:30.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.568 Test: blockdev writev readv 8 blocks ...passed 00:08:30.568 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.568 Test: blockdev writev readv block ...passed 00:08:30.568 Test: blockdev writev readv size > 128k ...passed 00:08:30.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.568 Test: blockdev comparev and writev ...[2024-10-15 04:32:19.910453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1238000 len:0x1000 00:08:30.568 [2024-10-15 04:32:19.910717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.568 passed 00:08:30.568 Test: blockdev nvme passthru rw ...passed 00:08:30.568 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:32:19.912036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.568 passed 00:08:30.568 Test: blockdev nvme admin passthru ...[2024-10-15 04:32:19.912229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.568 passed 00:08:30.568 Test: blockdev copy ...passed 00:08:30.568 Suite: bdevio tests on: Nvme2n1 00:08:30.568 Test: blockdev write read block ...passed 00:08:30.568 Test: blockdev write zeroes read block ...passed 00:08:30.568 Test: blockdev write zeroes read no split ...passed 00:08:30.568 Test: blockdev write zeroes read split ...passed 00:08:30.568 Test: blockdev write zeroes read split partial ...passed 00:08:30.568 Test: blockdev reset ...[2024-10-15 04:32:19.990419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:30.568 passed 00:08:30.568 Test: blockdev write read 8 blocks ...[2024-10-15 04:32:19.994858] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:30.568 passed 00:08:30.568 Test: blockdev write read size > 128k ...passed 00:08:30.568 Test: blockdev write read invalid size ...passed 00:08:30.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.568 Test: blockdev write read max offset ...passed 00:08:30.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.568 Test: blockdev writev readv 8 blocks ...passed 00:08:30.568 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.568 Test: blockdev writev readv block ...passed 00:08:30.568 Test: blockdev writev readv size > 128k ...passed 00:08:30.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.568 Test: blockdev comparev and writev ...[2024-10-15 04:32:20.006728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1234000 len:0x1000 00:08:30.568 passed 00:08:30.568 Test: blockdev nvme passthru rw ...[2024-10-15 04:32:20.007058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.568 passed 00:08:30.568 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:32:20.008190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.568 [2024-10-15 04:32:20.008307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.568 passed 00:08:30.568 Test: blockdev nvme admin passthru ...passed 00:08:30.568 Test: blockdev copy ...passed 00:08:30.568 Suite: bdevio tests on: Nvme1n1p2 00:08:30.568 Test: blockdev write read block ...passed 00:08:30.568 Test: blockdev write zeroes read block ...passed 00:08:30.568 Test: blockdev write zeroes read no split ...passed 00:08:30.568 Test: blockdev write zeroes read split ...passed 00:08:30.828 Test: blockdev write zeroes read split partial ...passed 00:08:30.828 Test: blockdev reset ...[2024-10-15 04:32:20.082692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:30.828 [2024-10-15 04:32:20.086722] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:30.828 passed 00:08:30.828 Test: blockdev write read 8 blocks ...passed 00:08:30.828 Test: blockdev write read size > 128k ...passed 00:08:30.828 Test: blockdev write read invalid size ...passed 00:08:30.828 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.828 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.828 Test: blockdev write read max offset ...passed 00:08:30.828 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.828 Test: blockdev writev readv 8 blocks ...passed 00:08:30.828 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.828 Test: blockdev writev readv block ...passed 00:08:30.828 Test: blockdev writev readv size > 128k ...passed 00:08:30.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.828 Test: blockdev comparev and writev ...[2024-10-15 04:32:20.096400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d1230000 len:0x1000 00:08:30.828 passed 00:08:30.828 Test: blockdev nvme passthru rw ...passed 00:08:30.828 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.828 Test: blockdev nvme admin passthru ...passed 00:08:30.828 Test: blockdev copy ...[2024-10-15 04:32:20.096616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.828 passed 00:08:30.828 Suite: bdevio tests on: Nvme1n1p1 00:08:30.828 Test: blockdev write read block ...passed 00:08:30.828 Test: blockdev write zeroes read block ...passed 00:08:30.828 Test: blockdev write zeroes read no split ...passed 00:08:30.828 Test: blockdev write zeroes read split ...passed 00:08:30.828 Test: blockdev write zeroes read split partial ...passed 00:08:30.828 Test: blockdev reset ...[2024-10-15 04:32:20.183968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:30.828 passed 00:08:30.828 Test: blockdev write read 8 blocks ...[2024-10-15 04:32:20.187962] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:30.828 passed 00:08:30.828 Test: blockdev write read size > 128k ...passed 00:08:30.828 Test: blockdev write read invalid size ...passed 00:08:30.828 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.828 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.828 Test: blockdev write read max offset ...passed 00:08:30.828 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.828 Test: blockdev writev readv 8 blocks ...passed 00:08:30.828 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.828 Test: blockdev writev readv block ...passed 00:08:30.828 Test: blockdev writev readv size > 128k ...passed 00:08:30.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.828 Test: blockdev comparev and writev ...[2024-10-15 04:32:20.196565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bd60e000 len:0x1000 00:08:30.828 passed 00:08:30.828 Test: blockdev nvme passthru rw ...passed 00:08:30.828 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.828 Test: blockdev nvme admin passthru ...passed 00:08:30.828 Test: blockdev copy ...[2024-10-15 04:32:20.196794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.828 passed 00:08:30.828 Suite: bdevio tests on: Nvme0n1 00:08:30.828 Test: blockdev write read block ...passed 00:08:30.828 Test: blockdev write zeroes read block ...passed 00:08:30.828 Test: blockdev write zeroes read no split ...passed 00:08:30.828 Test: blockdev write zeroes read split ...passed 00:08:30.828 Test: blockdev write zeroes read split partial ...passed 00:08:30.828 Test: blockdev reset ...[2024-10-15 04:32:20.266733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:30.828 [2024-10-15 04:32:20.270594] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:30.828 passed 00:08:30.828 Test: blockdev write read 8 blocks ...passed 00:08:30.828 Test: blockdev write read size > 128k ...passed 00:08:30.828 Test: blockdev write read invalid size ...passed 00:08:30.828 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.828 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.828 Test: blockdev write read max offset ...passed 00:08:30.828 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.828 Test: blockdev writev readv 8 blocks ...passed 00:08:30.828 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.828 Test: blockdev writev readv block ...passed 00:08:30.828 Test: blockdev writev readv size > 128k ...passed 00:08:30.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.828 Test: blockdev comparev and writev ...passed 00:08:30.828 Test: blockdev nvme passthru rw ...[2024-10-15 04:32:20.277974] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:30.828 separate metadata which is not supported yet. 
00:08:30.828 passed 00:08:30.828 Test: blockdev nvme passthru vendor specific ...[2024-10-15 04:32:20.278552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:30.828 passed 00:08:30.828 Test: blockdev nvme admin passthru ...[2024-10-15 04:32:20.278754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:30.828 passed 00:08:30.828 Test: blockdev copy ...passed 00:08:30.828 00:08:30.828 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.828 suites 7 7 n/a 0 0 00:08:30.828 tests 161 161 161 0 0 00:08:30.828 asserts 1025 1025 1025 0 n/a 00:08:30.828 00:08:30.828 Elapsed time = 1.819 seconds 00:08:30.828 0 00:08:30.828 04:32:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62917 00:08:30.828 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 62917 ']' 00:08:30.828 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 62917 00:08:30.828 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:30.828 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.828 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62917 00:08:31.087 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.087 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.087 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62917' 00:08:31.087 killing process with pid 62917 00:08:31.087 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 62917 00:08:31.087 04:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 62917 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:32.108 00:08:32.108 real 0m2.983s 00:08:32.108 user 0m7.686s 00:08:32.108 sys 0m0.426s 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:32.108 ************************************ 00:08:32.108 END TEST bdev_bounds 00:08:32.108 ************************************ 00:08:32.108 04:32:21 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:32.108 04:32:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:32.108 04:32:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.108 04:32:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:32.108 ************************************ 00:08:32.108 START TEST bdev_nbd 00:08:32.108 ************************************ 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62982 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62982 /var/tmp/spdk-nbd.sock 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 62982 ']' 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:32.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:32.108 04:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:32.367 [2024-10-15 04:32:21.618619] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
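The run below attaches each bdev to a kernel NBD device through the /var/tmp/spdk-nbd.sock RPC socket, and gates every attach on the waitfornbd helper from common/autotest_common.sh, whose xtrace repeats throughout the rest of this log: poll /proc/partitions until the kernel lists the device, then prove it is actually usable by copying one 4 KiB block out with direct I/O. A minimal sketch of that pattern, reconstructed from the trace records that follow (the function name, poll interval, and scratch-file path here are illustrative assumptions; the retry bound and the grep/dd/stat/rm sequence mirror the trace itself):

    # Wait until /dev/<name> appears, then verify it with a 4 KiB read-back.
    waitfornbd_sketch() {
        local nbd_name=$1 size i
        for ((i = 1; i <= 20; i++)); do                # retry bound as traced: (( i <= 20 ))
            # the device exists once the kernel lists it in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # assumed poll interval, not from the trace
        done
        # one direct-I/O block read; a device that is not ready fails here
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                               # the '[' 4096 '!=' 0 ']' test in the trace
    }

Teardown runs the inverse check: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the entry disappears, which is the grep/break loop that follows every stop in the trace below.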
00:08:32.367 [2024-10-15 04:32:21.619069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.367 [2024-10-15 04:32:21.803102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.678 [2024-10-15 04:32:21.922218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:33.246 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.504 1+0 records in 00:08:33.504 1+0 records out 00:08:33.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048568 s, 8.4 MB/s 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:33.504 04:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.763 1+0 records in 00:08:33.763 1+0 records out 00:08:33.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569364 s, 7.2 MB/s 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:33.763 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.021 1+0 records in 00:08:34.021 1+0 records out 00:08:34.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138705 s, 3.0 MB/s 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.021 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:34.279 04:32:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.279 1+0 records in 00:08:34.279 1+0 records out 00:08:34.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644425 s, 6.4 MB/s 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.535 04:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:34.535 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:34.535 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:34.535 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.794 1+0 records in 00:08:34.794 1+0 records out 00:08:34.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117458 s, 3.5 MB/s 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:34.794 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:35.053 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.054 1+0 records in 00:08:35.054 1+0 records out 00:08:35.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109959 s, 3.7 MB/s 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:35.054 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.315 1+0 records in 00:08:35.315 1+0 records out 00:08:35.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618103 s, 6.6 MB/s 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:35.315 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd0", 00:08:35.574 "bdev_name": "Nvme0n1" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd1", 00:08:35.574 "bdev_name": "Nvme1n1p1" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd2", 00:08:35.574 "bdev_name": "Nvme1n1p2" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd3", 00:08:35.574 "bdev_name": "Nvme2n1" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd4", 00:08:35.574 "bdev_name": "Nvme2n2" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd5", 00:08:35.574 "bdev_name": "Nvme2n3" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd6", 00:08:35.574 "bdev_name": "Nvme3n1" 00:08:35.574 } 00:08:35.574 ]' 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd0", 00:08:35.574 "bdev_name": "Nvme0n1" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd1", 00:08:35.574 "bdev_name": "Nvme1n1p1" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd2", 00:08:35.574 "bdev_name": "Nvme1n1p2" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd3", 00:08:35.574 "bdev_name": "Nvme2n1" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd4", 00:08:35.574 "bdev_name": "Nvme2n2" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd5", 00:08:35.574 "bdev_name": "Nvme2n3" 00:08:35.574 }, 00:08:35.574 { 00:08:35.574 "nbd_device": "/dev/nbd6", 00:08:35.574 "bdev_name": "Nvme3n1" 00:08:35.574 } 00:08:35.574 ]' 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.574 04:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:35.833 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.092 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.402 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.668 04:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:36.926 04:32:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.926 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.247 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.247 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.247 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:37.247 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:37.247 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:37.247 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:37.506 04:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:37.506 /dev/nbd0 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.763 1+0 records in 00:08:37.763 1+0 records out 00:08:37.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646018 s, 6.3 MB/s 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:37.763 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:37.763 /dev/nbd1 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.022 1+0 records in 00:08:38.022 1+0 records out 00:08:38.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623339 s, 6.6 MB/s 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.022 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:38.022 /dev/nbd10 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.280 1+0 records in 00:08:38.280 1+0 records out 00:08:38.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492971 s, 8.3 MB/s 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.280 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:38.538 /dev/nbd11 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:38.538 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.539 1+0 records in 00:08:38.539 1+0 records out 00:08:38.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591482 s, 6.9 MB/s 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.539 04:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:38.797 /dev/nbd12 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:38.797 04:32:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.797 1+0 records in 00:08:38.797 1+0 records out 00:08:38.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790755 s, 5.2 MB/s 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.797 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:39.055 /dev/nbd13 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.055 1+0 records in 00:08:39.055 1+0 records out 00:08:39.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075306 s, 5.4 MB/s 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:39.055 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:39.314 /dev/nbd14 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.314 1+0 records in 00:08:39.314 1+0 records out 00:08:39.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626701 s, 6.5 MB/s 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.314 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd0", 00:08:39.573 "bdev_name": "Nvme0n1" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd1", 00:08:39.573 "bdev_name": "Nvme1n1p1" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd10", 00:08:39.573 "bdev_name": "Nvme1n1p2" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd11", 00:08:39.573 "bdev_name": "Nvme2n1" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd12", 00:08:39.573 "bdev_name": "Nvme2n2" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd13", 00:08:39.573 "bdev_name": "Nvme2n3" 00:08:39.573 }, 00:08:39.573 { 
00:08:39.573 "nbd_device": "/dev/nbd14", 00:08:39.573 "bdev_name": "Nvme3n1" 00:08:39.573 } 00:08:39.573 ]' 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd0", 00:08:39.573 "bdev_name": "Nvme0n1" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd1", 00:08:39.573 "bdev_name": "Nvme1n1p1" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd10", 00:08:39.573 "bdev_name": "Nvme1n1p2" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd11", 00:08:39.573 "bdev_name": "Nvme2n1" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd12", 00:08:39.573 "bdev_name": "Nvme2n2" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd13", 00:08:39.573 "bdev_name": "Nvme2n3" 00:08:39.573 }, 00:08:39.573 { 00:08:39.573 "nbd_device": "/dev/nbd14", 00:08:39.573 "bdev_name": "Nvme3n1" 00:08:39.573 } 00:08:39.573 ]' 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:39.573 /dev/nbd1 00:08:39.573 /dev/nbd10 00:08:39.573 /dev/nbd11 00:08:39.573 /dev/nbd12 00:08:39.573 /dev/nbd13 00:08:39.573 /dev/nbd14' 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:39.573 /dev/nbd1 00:08:39.573 /dev/nbd10 00:08:39.573 /dev/nbd11 00:08:39.573 /dev/nbd12 00:08:39.573 /dev/nbd13 00:08:39.573 /dev/nbd14' 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.573 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:39.574 04:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:39.574 256+0 records in 00:08:39.574 256+0 records out 00:08:39.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667808 s, 157 MB/s 00:08:39.574 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.574 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:39.832 256+0 records in 00:08:39.832 256+0 records out 00:08:39.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13032 s, 8.0 MB/s 00:08:39.832 
04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.832 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:39.832 256+0 records in 00:08:39.832 256+0 records out 00:08:39.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141679 s, 7.4 MB/s 00:08:39.832 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.832 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:40.091 256+0 records in 00:08:40.091 256+0 records out 00:08:40.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139569 s, 7.5 MB/s 00:08:40.091 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.091 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:40.091 256+0 records in 00:08:40.091 256+0 records out 00:08:40.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138521 s, 7.6 MB/s 00:08:40.091 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.091 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:40.349 256+0 records in 00:08:40.349 256+0 records out 00:08:40.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139394 s, 7.5 MB/s 00:08:40.349 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.349 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:40.608 256+0 records in 00:08:40.608 256+0 records out 00:08:40.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139391 s, 7.5 MB/s 00:08:40.608 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:40.608 256+0 records in 00:08:40.608 256+0 records out 00:08:40.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138582 s, 7.6 MB/s 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:40.608 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.609 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.868 
04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.868 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.126 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.385 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:41.644 04:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.644 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:41.902 04:32:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.902 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.160 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:42.419 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:42.678 
04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:42.678 04:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:42.964 malloc_lvol_verify 00:08:42.964 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:42.964 48c6d493-8b1d-4fb8-be97-0202e4bd99bc 00:08:42.964 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:43.222 b2dda9e6-a3a9-4afd-ace5-fee0312134ac 00:08:43.222 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:43.480 /dev/nbd0 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:43.480 mke2fs 1.47.0 (5-Feb-2023) 00:08:43.480 Discarding device blocks: 0/4096 done 00:08:43.480 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:43.480 00:08:43.480 Allocating group tables: 0/1 done 00:08:43.480 Writing inode tables: 0/1 done 00:08:43.480 Creating journal (1024 blocks): done 00:08:43.480 Writing superblocks and filesystem accounting information: 0/1 done 00:08:43.480 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:43.480 04:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.480 04:32:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62982 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 62982 ']' 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 62982 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62982 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62982' 00:08:43.740 killing process with pid 62982 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 62982 00:08:43.740 04:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 62982 00:08:45.116 04:32:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:45.116 00:08:45.116 real 0m12.884s 00:08:45.116 user 0m16.783s 00:08:45.116 sys 0m5.479s 00:08:45.116 04:32:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.116 04:32:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:45.116 ************************************ 00:08:45.116 END TEST bdev_nbd 00:08:45.116 ************************************ 00:08:45.116 04:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:45.116 04:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:45.116 04:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:45.116 skipping fio tests on NVMe due to multi-ns failures. 00:08:45.116 04:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:45.116 04:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:45.116 04:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:45.116 04:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:45.116 04:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.116 04:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:45.116 ************************************ 00:08:45.116 START TEST bdev_verify 00:08:45.116 ************************************ 00:08:45.116 04:32:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:45.116 [2024-10-15 04:32:34.541679] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:45.116 [2024-10-15 04:32:34.541826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63407 ] 00:08:45.375 [2024-10-15 04:32:34.715986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.375 [2024-10-15 04:32:34.837851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.375 [2024-10-15 04:32:34.837911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.392 Running I/O for 5 seconds... 
00:08:48.705 20800.00 IOPS, 81.25 MiB/s [2024-10-15T04:32:39.143Z] 21312.00 IOPS, 83.25 MiB/s [2024-10-15T04:32:40.167Z] 21653.33 IOPS, 84.58 MiB/s [2024-10-15T04:32:40.735Z] 21056.00 IOPS, 82.25 MiB/s [2024-10-15T04:32:40.735Z] 21017.60 IOPS, 82.10 MiB/s 00:08:51.231 Latency(us) 00:08:51.231 [2024-10-15T04:32:40.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.231 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0xbd0bd 00:08:51.231 Nvme0n1 : 5.04 1471.63 5.75 0.00 0.00 86680.54 19581.84 86328.55 00:08:51.231 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:51.231 Nvme0n1 : 5.04 1472.77 5.75 0.00 0.00 86600.96 21161.02 91381.92 00:08:51.231 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0x4ff80 00:08:51.231 Nvme1n1p1 : 5.05 1471.18 5.75 0.00 0.00 86547.58 22108.53 81696.28 00:08:51.231 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:51.231 Nvme1n1p1 : 5.07 1476.83 5.77 0.00 0.00 86163.12 12949.28 88434.12 00:08:51.231 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0x4ff7f 00:08:51.231 Nvme1n1p2 : 5.07 1476.35 5.77 0.00 0.00 86007.51 9790.92 76221.79 00:08:51.231 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:51.231 Nvme1n1p2 : 5.07 1476.37 5.77 0.00 0.00 85970.63 12317.61 87170.78 00:08:51.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0x80000 00:08:51.231 Nvme2n1 : 5.07 1475.59 5.76 0.00 0.00 85881.97 11001.63 73695.10 00:08:51.231 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x80000 length 0x80000 00:08:51.231 Nvme2n1 : 5.09 1484.06 5.80 0.00 0.00 85619.49 13423.04 86749.66 00:08:51.231 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0x80000 00:08:51.231 Nvme2n2 : 5.09 1484.23 5.80 0.00 0.00 85426.78 10422.59 75800.67 00:08:51.231 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x80000 length 0x80000 00:08:51.231 Nvme2n2 : 5.09 1483.62 5.80 0.00 0.00 85485.21 13791.51 86749.66 00:08:51.231 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0x80000 00:08:51.231 Nvme2n3 : 5.09 1483.86 5.80 0.00 0.00 85282.16 10369.95 78327.36 00:08:51.231 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x80000 length 0x80000 00:08:51.231 Nvme2n3 : 5.09 1483.18 5.79 0.00 0.00 85352.65 13580.95 88013.01 00:08:51.231 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x0 length 0x20000 00:08:51.231 Nvme3n1 : 5.09 1483.40 5.79 0.00 0.00 85168.26 10264.67 80011.82 00:08:51.231 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.231 Verification LBA range: start 0x20000 length 0x20000 00:08:51.231 
Nvme3n1 : 5.09 1482.78 5.79 0.00 0.00 85230.99 13159.84 90539.69 00:08:51.231 [2024-10-15T04:32:40.735Z] =================================================================================================================== 00:08:51.231 [2024-10-15T04:32:40.735Z] Total : 20705.84 80.88 0.00 0.00 85812.19 9790.92 91381.92 00:08:53.136 00:08:53.136 real 0m7.827s 00:08:53.136 user 0m14.430s 00:08:53.136 sys 0m0.336s 00:08:53.136 04:32:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.136 04:32:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 ************************************ 00:08:53.136 END TEST bdev_verify 00:08:53.136 ************************************ 00:08:53.136 04:32:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:53.136 04:32:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:53.136 04:32:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.136 04:32:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 ************************************ 00:08:53.136 START TEST bdev_verify_big_io 00:08:53.136 ************************************ 00:08:53.136 04:32:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:53.136 [2024-10-15 04:32:42.432353] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:08:53.136 [2024-10-15 04:32:42.432484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63511 ] 00:08:53.136 [2024-10-15 04:32:42.604020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.394 [2024-10-15 04:32:42.723272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.394 [2024-10-15 04:32:42.723317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.331 Running I/O for 5 seconds... 
00:08:58.168 16.00 IOPS, 1.00 MiB/s [2024-10-15T04:32:49.049Z] 2048.50 IOPS, 128.03 MiB/s [2024-10-15T04:32:49.615Z] 2587.33 IOPS, 161.71 MiB/s [2024-10-15T04:32:49.615Z] 2877.25 IOPS, 179.83 MiB/s 00:09:00.111 Latency(us) 00:09:00.111 [2024-10-15T04:32:49.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.111 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0xbd0b 00:09:00.111 Nvme0n1 : 5.72 130.86 8.18 0.00 0.00 921320.99 25372.17 1078054.04 00:09:00.111 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:00.111 Nvme0n1 : 5.74 134.40 8.40 0.00 0.00 904912.88 36005.32 1003937.82 00:09:00.111 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0x4ff8 00:09:00.111 Nvme1n1p1 : 5.59 137.42 8.59 0.00 0.00 874052.40 67378.38 815278.37 00:09:00.111 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:00.111 Nvme1n1p1 : 5.62 136.67 8.54 0.00 0.00 885495.98 96856.42 848967.56 00:09:00.111 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0x4ff7 00:09:00.111 Nvme1n1p2 : 5.72 138.47 8.65 0.00 0.00 841867.09 122123.31 734424.31 00:09:00.111 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:00.111 Nvme1n1p2 : 5.74 137.45 8.59 0.00 0.00 853415.68 118754.39 727686.48 00:09:00.111 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0x8000 00:09:00.111 Nvme2n1 : 5.84 149.09 9.32 0.00 0.00 773671.04 42322.04 751268.91 00:09:00.111 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x8000 length 0x8000 00:09:00.111 Nvme2n1 : 5.81 143.16 8.95 0.00 0.00 807070.93 67378.38 727686.48 00:09:00.111 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0x8000 00:09:00.111 Nvme2n2 : 5.84 149.40 9.34 0.00 0.00 752556.49 42532.60 771482.42 00:09:00.111 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x8000 length 0x8000 00:09:00.111 Nvme2n2 : 5.86 146.24 9.14 0.00 0.00 773072.48 26424.96 852336.48 00:09:00.111 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0x8000 00:09:00.111 Nvme2n3 : 5.85 153.28 9.58 0.00 0.00 717911.30 34531.42 795064.85 00:09:00.111 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x8000 length 0x8000 00:09:00.111 Nvme2n3 : 5.87 145.17 9.07 0.00 0.00 762687.57 20002.96 1421683.77 00:09:00.111 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x0 length 0x2000 00:09:00.111 Nvme3n1 : 5.90 172.80 10.80 0.00 0.00 623566.80 934.35 929821.61 00:09:00.111 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.111 Verification LBA range: start 0x2000 length 0x2000 00:09:00.111 Nvme3n1 : 5.87 153.55 9.60 0.00 0.00 703859.74 
3000.44 1455372.95 00:09:00.111 [2024-10-15T04:32:49.615Z] =================================================================================================================== 00:09:00.111 [2024-10-15T04:32:49.615Z] Total : 2027.95 126.75 0.00 0.00 792850.79 934.35 1455372.95 00:09:02.704 00:09:02.704 real 0m9.245s 00:09:02.704 user 0m17.282s 00:09:02.704 sys 0m0.356s 00:09:02.704 04:32:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.704 04:32:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:02.704 ************************************ 00:09:02.704 END TEST bdev_verify_big_io 00:09:02.704 ************************************ 00:09:02.704 04:32:51 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:02.704 04:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:02.704 04:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.704 04:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:02.704 ************************************ 00:09:02.704 START TEST bdev_write_zeroes 00:09:02.704 ************************************ 00:09:02.704 04:32:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:02.704 [2024-10-15 04:32:51.737524] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:09:02.704 [2024-10-15 04:32:51.737653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63631 ] 00:09:02.704 [2024-10-15 04:32:51.911363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.704 [2024-10-15 04:32:52.025806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.271 Running I/O for 1 seconds... 
00:09:04.493 66752.00 IOPS, 260.75 MiB/s 00:09:04.493 Latency(us) 00:09:04.493 [2024-10-15T04:32:53.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.493 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme0n1 : 1.02 9503.75 37.12 0.00 0.00 13437.85 11422.74 34741.98 00:09:04.493 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme1n1p1 : 1.02 9493.01 37.08 0.00 0.00 13434.81 11791.22 34741.98 00:09:04.493 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme1n1p2 : 1.03 9483.11 37.04 0.00 0.00 13380.88 11317.46 32425.84 00:09:04.493 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme2n1 : 1.03 9474.51 37.01 0.00 0.00 13338.28 11528.02 29899.16 00:09:04.493 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme2n2 : 1.03 9465.98 36.98 0.00 0.00 13311.55 11580.66 28846.37 00:09:04.493 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme2n3 : 1.03 9456.61 36.94 0.00 0.00 13281.98 10633.15 26424.96 00:09:04.493 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:04.493 Nvme3n1 : 1.03 9506.51 37.13 0.00 0.00 13206.78 6527.28 24319.38 00:09:04.493 [2024-10-15T04:32:53.997Z] =================================================================================================================== 00:09:04.493 [2024-10-15T04:32:53.997Z] Total : 66383.48 259.31 0.00 0.00 13341.61 6527.28 34741.98 00:09:05.431 00:09:05.431 real 0m3.285s 00:09:05.431 user 0m2.905s 00:09:05.431 sys 0m0.260s 00:09:05.431 04:32:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.431 04:32:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:05.431 ************************************ 00:09:05.431 END TEST bdev_write_zeroes 00:09:05.431 ************************************ 00:09:05.689 04:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:05.689 04:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:05.689 04:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.690 04:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:05.690 ************************************ 00:09:05.690 START TEST bdev_json_nonenclosed 00:09:05.690 ************************************ 00:09:05.690 04:32:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:05.690 [2024-10-15 04:32:55.104123] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:09:05.690 [2024-10-15 04:32:55.104276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63684 ] 00:09:05.948 [2024-10-15 04:32:55.276774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.948 [2024-10-15 04:32:55.393160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.948 [2024-10-15 04:32:55.393262] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:05.948 [2024-10-15 04:32:55.393284] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:05.948 [2024-10-15 04:32:55.393296] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.207 00:09:06.207 real 0m0.638s 00:09:06.207 user 0m0.399s 00:09:06.207 sys 0m0.135s 00:09:06.207 04:32:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.207 04:32:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:06.207 ************************************ 00:09:06.207 END TEST bdev_json_nonenclosed 00:09:06.207 ************************************ 00:09:06.207 04:32:55 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:06.207 04:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:06.207 04:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.207 04:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.466 ************************************ 00:09:06.466 START TEST bdev_json_nonarray 00:09:06.466 ************************************ 00:09:06.466 04:32:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:06.466 [2024-10-15 04:32:55.813581] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:09:06.466 [2024-10-15 04:32:55.813708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:09:06.725 [2024-10-15 04:32:55.987124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.725 [2024-10-15 04:32:56.099998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.725 [2024-10-15 04:32:56.100102] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:06.725 [2024-10-15 04:32:56.100125] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:06.725 [2024-10-15 04:32:56.100137] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.984 00:09:06.984 real 0m0.640s 00:09:06.984 user 0m0.395s 00:09:06.984 sys 0m0.140s 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:06.984 ************************************ 00:09:06.984 END TEST bdev_json_nonarray 00:09:06.984 ************************************ 00:09:06.984 04:32:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:06.984 04:32:56 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:06.984 04:32:56 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:06.984 04:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.984 04:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.984 04:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.984 ************************************ 00:09:06.984 START TEST bdev_gpt_uuid 00:09:06.984 ************************************ 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63735 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63735 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 63735 ']' 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.984 04:32:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:07.247 [2024-10-15 04:32:56.545139] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:09:07.247 [2024-10-15 04:32:56.545271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63735 ] 00:09:07.247 [2024-10-15 04:32:56.705206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.513 [2024-10-15 04:32:56.821318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.468 04:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.468 04:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:09:08.468 04:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:08.468 04:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.468 04:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:08.727 Some configs were skipped because the RPC state that can call them passed over. 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:08.727 { 00:09:08.727 "name": "Nvme1n1p1", 00:09:08.727 "aliases": [ 00:09:08.727 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:08.727 ], 00:09:08.727 "product_name": "GPT Disk", 00:09:08.727 "block_size": 4096, 00:09:08.727 "num_blocks": 655104, 00:09:08.727 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:08.727 "assigned_rate_limits": { 00:09:08.727 "rw_ios_per_sec": 0, 00:09:08.727 "rw_mbytes_per_sec": 0, 00:09:08.727 "r_mbytes_per_sec": 0, 00:09:08.727 "w_mbytes_per_sec": 0 00:09:08.727 }, 00:09:08.727 "claimed": false, 00:09:08.727 "zoned": false, 00:09:08.727 "supported_io_types": { 00:09:08.727 "read": true, 00:09:08.727 "write": true, 00:09:08.727 "unmap": true, 00:09:08.727 "flush": true, 00:09:08.727 "reset": true, 00:09:08.727 "nvme_admin": false, 00:09:08.727 "nvme_io": false, 00:09:08.727 "nvme_io_md": false, 00:09:08.727 "write_zeroes": true, 00:09:08.727 "zcopy": false, 00:09:08.727 "get_zone_info": false, 00:09:08.727 "zone_management": false, 00:09:08.727 "zone_append": false, 00:09:08.727 "compare": true, 00:09:08.727 "compare_and_write": false, 00:09:08.727 "abort": true, 00:09:08.727 "seek_hole": false, 00:09:08.727 "seek_data": false, 00:09:08.727 "copy": true, 00:09:08.727 "nvme_iov_md": false 00:09:08.727 }, 00:09:08.727 "driver_specific": { 
00:09:08.727 "gpt": { 00:09:08.727 "base_bdev": "Nvme1n1", 00:09:08.727 "offset_blocks": 256, 00:09:08.727 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:08.727 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:08.727 "partition_name": "SPDK_TEST_first" 00:09:08.727 } 00:09:08.727 } 00:09:08.727 } 00:09:08.727 ]' 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.727 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:08.985 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.985 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:08.985 { 00:09:08.985 "name": "Nvme1n1p2", 00:09:08.985 "aliases": [ 00:09:08.985 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:08.985 ], 00:09:08.985 "product_name": "GPT Disk", 00:09:08.986 "block_size": 4096, 00:09:08.986 "num_blocks": 655103, 00:09:08.986 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:08.986 "assigned_rate_limits": { 00:09:08.986 "rw_ios_per_sec": 0, 00:09:08.986 "rw_mbytes_per_sec": 0, 00:09:08.986 "r_mbytes_per_sec": 0, 00:09:08.986 "w_mbytes_per_sec": 0 00:09:08.986 }, 00:09:08.986 "claimed": false, 00:09:08.986 "zoned": false, 00:09:08.986 "supported_io_types": { 00:09:08.986 "read": true, 00:09:08.986 "write": true, 00:09:08.986 "unmap": true, 00:09:08.986 "flush": true, 00:09:08.986 "reset": true, 00:09:08.986 "nvme_admin": false, 00:09:08.986 "nvme_io": false, 00:09:08.986 "nvme_io_md": false, 00:09:08.986 "write_zeroes": true, 00:09:08.986 "zcopy": false, 00:09:08.986 "get_zone_info": false, 00:09:08.986 "zone_management": false, 00:09:08.986 "zone_append": false, 00:09:08.986 "compare": true, 00:09:08.986 "compare_and_write": false, 00:09:08.986 "abort": true, 00:09:08.986 "seek_hole": false, 00:09:08.986 "seek_data": false, 00:09:08.986 "copy": true, 00:09:08.986 "nvme_iov_md": false 00:09:08.986 }, 00:09:08.986 "driver_specific": { 00:09:08.986 "gpt": { 00:09:08.986 "base_bdev": "Nvme1n1", 00:09:08.986 "offset_blocks": 655360, 00:09:08.986 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:08.986 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:08.986 "partition_name": "SPDK_TEST_second" 00:09:08.986 } 00:09:08.986 } 00:09:08.986 } 00:09:08.986 ]' 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63735 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 63735 ']' 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 63735 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63735 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.986 killing process with pid 63735 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63735' 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 63735 00:09:08.986 04:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 63735 00:09:11.517 00:09:11.517 real 0m4.390s 00:09:11.517 user 0m4.483s 00:09:11.517 sys 0m0.554s 00:09:11.517 04:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.517 ************************************ 00:09:11.517 END TEST bdev_gpt_uuid 00:09:11.517 ************************************ 00:09:11.517 04:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:11.517 04:33:00 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:12.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.344 Waiting for block devices as requested 00:09:12.344 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.602 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:12.602 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.602 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.872 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:17.872 04:33:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:17.872 04:33:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:18.131 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:18.131 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:18.131 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:18.131 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:18.131 04:33:07 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:18.131 00:09:18.131 real 1m5.941s 00:09:18.131 user 1m22.286s 00:09:18.131 sys 0m12.270s 00:09:18.131 04:33:07 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.131 04:33:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:18.131 ************************************ 00:09:18.131 END TEST blockdev_nvme_gpt 00:09:18.131 ************************************ 00:09:18.131 04:33:07 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.131 04:33:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.131 04:33:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.131 04:33:07 -- common/autotest_common.sh@10 -- # set +x 00:09:18.131 ************************************ 00:09:18.131 START TEST nvme 00:09:18.131 ************************************ 00:09:18.131 04:33:07 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.390 * Looking for test storage... 00:09:18.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.390 04:33:07 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.390 04:33:07 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.390 04:33:07 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.390 04:33:07 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.390 04:33:07 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.390 04:33:07 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:18.390 04:33:07 nvme -- scripts/common.sh@345 -- # : 1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.390 04:33:07 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.390 04:33:07 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@353 -- # local d=1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.390 04:33:07 nvme -- scripts/common.sh@355 -- # echo 1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.390 04:33:07 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@353 -- # local d=2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.390 04:33:07 nvme -- scripts/common.sh@355 -- # echo 2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.390 04:33:07 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.390 04:33:07 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.390 04:33:07 nvme -- scripts/common.sh@368 -- # return 0 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:18.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.390 --rc genhtml_branch_coverage=1 00:09:18.390 --rc genhtml_function_coverage=1 00:09:18.390 --rc genhtml_legend=1 00:09:18.390 --rc geninfo_all_blocks=1 00:09:18.390 --rc geninfo_unexecuted_blocks=1 00:09:18.390 00:09:18.390 ' 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:18.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.390 --rc genhtml_branch_coverage=1 00:09:18.390 --rc genhtml_function_coverage=1 00:09:18.390 --rc genhtml_legend=1 00:09:18.390 --rc geninfo_all_blocks=1 00:09:18.390 --rc geninfo_unexecuted_blocks=1 00:09:18.390 00:09:18.390 ' 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:18.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.390 --rc genhtml_branch_coverage=1 00:09:18.390 --rc genhtml_function_coverage=1 00:09:18.390 --rc genhtml_legend=1 00:09:18.390 --rc geninfo_all_blocks=1 00:09:18.390 --rc geninfo_unexecuted_blocks=1 00:09:18.390 00:09:18.390 ' 00:09:18.390 04:33:07 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:18.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.390 --rc genhtml_branch_coverage=1 00:09:18.390 --rc genhtml_function_coverage=1 00:09:18.390 --rc genhtml_legend=1 00:09:18.390 --rc geninfo_all_blocks=1 00:09:18.390 --rc geninfo_unexecuted_blocks=1 00:09:18.390 00:09:18.390 ' 00:09:18.390 04:33:07 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.901 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.901 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.901 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.901 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.901 04:33:09 nvme -- nvme/nvme.sh@79 -- # uname 00:09:19.901 04:33:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:19.901 04:33:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:19.901 04:33:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.901 04:33:09 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1071 -- # stubpid=64402 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:19.901 Waiting for stub to ready for secondary processes... 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64402 ]] 00:09:19.901 04:33:09 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:19.901 [2024-10-15 04:33:09.389910] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:09:19.901 [2024-10-15 04:33:09.390039] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:20.837 04:33:10 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:20.837 04:33:10 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64402 ]] 00:09:20.837 04:33:10 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:21.095 [2024-10-15 04:33:10.392844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.095 [2024-10-15 04:33:10.503451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.095 [2024-10-15 04:33:10.503589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.095 [2024-10-15 04:33:10.503633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.095 [2024-10-15 04:33:10.521114] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:21.095 [2024-10-15 04:33:10.521151] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.095 [2024-10-15 04:33:10.527898] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:21.095 [2024-10-15 04:33:10.528039] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:21.095 [2024-10-15 04:33:10.530794] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.095 [2024-10-15 04:33:10.531065] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:21.095 [2024-10-15 04:33:10.531159] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:21.095 [2024-10-15 04:33:10.533653] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.095 [2024-10-15 04:33:10.533828] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:21.095 [2024-10-15 04:33:10.533903] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:21.095 [2024-10-15 04:33:10.536704] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.095 [2024-10-15 04:33:10.536996] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:21.095 [2024-10-15 04:33:10.537067] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:21.095 [2024-10-15 04:33:10.537123] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:21.095 [2024-10-15 04:33:10.537174] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:22.030 done. 00:09:22.030 04:33:11 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:22.030 04:33:11 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:09:22.030 04:33:11 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:22.030 04:33:11 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:09:22.030 04:33:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.030 04:33:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.030 ************************************ 00:09:22.030 START TEST nvme_reset 00:09:22.030 ************************************ 00:09:22.030 04:33:11 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:22.289 Initializing NVMe Controllers 00:09:22.289 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:22.289 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:22.289 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:22.289 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:22.289 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:22.289 00:09:22.289 real 0m0.291s 00:09:22.289 user 0m0.101s 00:09:22.289 sys 0m0.141s 00:09:22.289 04:33:11 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.289 04:33:11 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:22.289 ************************************ 00:09:22.289 END TEST nvme_reset 00:09:22.289 ************************************ 00:09:22.289 04:33:11 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:22.289 04:33:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.289 04:33:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.289 04:33:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.289 ************************************ 00:09:22.289 START TEST nvme_identify 00:09:22.289 ************************************ 00:09:22.289 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:09:22.289 04:33:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:22.289 04:33:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:22.289 04:33:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:22.289 04:33:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:22.289 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:22.289 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:09:22.289 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:22.289 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:22.289 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:22.551 04:33:11 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:22.551 04:33:11 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:22.551 04:33:11 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:22.551 [2024-10-15 04:33:12.039643] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 64436 terminated unexpected 00:09:22.551 ===================================================== 00:09:22.551 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.551 ===================================================== 00:09:22.551 Controller Capabilities/Features 00:09:22.551 ================================ 00:09:22.551 Vendor ID: 1b36 00:09:22.551 Subsystem Vendor ID: 1af4 00:09:22.551 Serial Number: 12340 00:09:22.551 Model Number: QEMU NVMe Ctrl 00:09:22.551 Firmware Version: 8.0.0 00:09:22.551 Recommended Arb Burst: 6 00:09:22.551 IEEE OUI Identifier: 00 54 52 00:09:22.551 Multi-path I/O 00:09:22.551 May have multiple subsystem ports: No 00:09:22.551 May have multiple controllers: No 00:09:22.551 Associated with SR-IOV VF: No 00:09:22.551 Max Data Transfer Size: 524288 00:09:22.551 Max Number of Namespaces: 256 00:09:22.551 Max Number of I/O Queues: 64 00:09:22.551 NVMe Specification Version (VS): 1.4 00:09:22.551 NVMe Specification Version (Identify): 1.4 00:09:22.551 Maximum Queue Entries: 2048 00:09:22.551 Contiguous Queues Required: Yes 00:09:22.551 Arbitration Mechanisms Supported 00:09:22.551 Weighted Round Robin: Not Supported 00:09:22.551 Vendor Specific: Not Supported 00:09:22.551 Reset Timeout: 7500 ms 00:09:22.551 Doorbell Stride: 4 bytes 00:09:22.551 NVM Subsystem Reset: Not Supported 00:09:22.551 Command Sets Supported 00:09:22.551 NVM Command Set: Supported 00:09:22.551 Boot Partition: Not Supported 00:09:22.551 Memory Page Size Minimum: 4096 bytes 00:09:22.551 Memory Page Size Maximum: 65536 bytes 00:09:22.551 Persistent Memory Region: Not Supported 00:09:22.551 Optional Asynchronous Events Supported 00:09:22.551 Namespace Attribute Notices: Supported 00:09:22.551 Firmware Activation Notices: Not Supported 00:09:22.551 ANA Change Notices: Not Supported 00:09:22.551 PLE Aggregate Log Change Notices: Not Supported 00:09:22.551 LBA Status Info Alert Notices: Not Supported 00:09:22.551 EGE Aggregate Log Change Notices: Not Supported 00:09:22.551 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.551 Zone Descriptor Change Notices: Not Supported 00:09:22.551 Discovery Log Change Notices: Not Supported 00:09:22.551 Controller Attributes 00:09:22.551 128-bit Host Identifier: Not Supported 00:09:22.551 Non-Operational Permissive Mode: Not Supported 00:09:22.551 NVM Sets: Not Supported 00:09:22.551 Read Recovery Levels: Not Supported 00:09:22.551 Endurance Groups: Not Supported 00:09:22.551 Predictable Latency Mode: Not Supported 00:09:22.551 Traffic Based Keep ALive: Not Supported 00:09:22.551 Namespace Granularity: Not Supported 00:09:22.551 SQ Associations: Not Supported 00:09:22.551 UUID List: Not Supported 00:09:22.551 Multi-Domain Subsystem: Not Supported 00:09:22.551 Fixed Capacity Management: Not Supported 00:09:22.551 Variable Capacity Management: Not Supported 00:09:22.551 Delete Endurance Group: Not Supported 00:09:22.551 Delete NVM Set: Not Supported 00:09:22.551 Extended LBA Formats Supported: Supported 00:09:22.551 Flexible Data Placement Supported: Not Supported 00:09:22.551 00:09:22.551 Controller Memory Buffer Support 00:09:22.551 ================================ 00:09:22.551 Supported: No 00:09:22.551 
00:09:22.551 Persistent Memory Region Support 00:09:22.551 ================================ 00:09:22.551 Supported: No 00:09:22.551 00:09:22.551 Admin Command Set Attributes 00:09:22.551 ============================ 00:09:22.551 Security Send/Receive: Not Supported 00:09:22.551 Format NVM: Supported 00:09:22.551 Firmware Activate/Download: Not Supported 00:09:22.551 Namespace Management: Supported 00:09:22.551 Device Self-Test: Not Supported 00:09:22.551 Directives: Supported 00:09:22.551 NVMe-MI: Not Supported 00:09:22.551 Virtualization Management: Not Supported 00:09:22.551 Doorbell Buffer Config: Supported 00:09:22.551 Get LBA Status Capability: Not Supported 00:09:22.551 Command & Feature Lockdown Capability: Not Supported 00:09:22.551 Abort Command Limit: 4 00:09:22.551 Async Event Request Limit: 4 00:09:22.551 Number of Firmware Slots: N/A 00:09:22.551 Firmware Slot 1 Read-Only: N/A 00:09:22.551 Firmware Activation Without Reset: N/A 00:09:22.551 Multiple Update Detection Support: N/A 00:09:22.551 Firmware Update Granularity: No Information Provided 00:09:22.551 Per-Namespace SMART Log: Yes 00:09:22.551 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.551 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:22.551 Command Effects Log Page: Supported 00:09:22.551 Get Log Page Extended Data: Supported 00:09:22.551 Telemetry Log Pages: Not Supported 00:09:22.551 Persistent Event Log Pages: Not Supported 00:09:22.551 Supported Log Pages Log Page: May Support 00:09:22.551 Commands Supported & Effects Log Page: Not Supported 00:09:22.551 Feature Identifiers & Effects Log Page:May Support 00:09:22.551 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.551 Data Area 4 for Telemetry Log: Not Supported 00:09:22.551 Error Log Page Entries Supported: 1 00:09:22.551 Keep Alive: Not Supported 00:09:22.551 00:09:22.551 NVM Command Set Attributes 00:09:22.551 ========================== 00:09:22.551 Submission Queue Entry Size 00:09:22.551 Max: 64 00:09:22.551 Min: 64 00:09:22.551 Completion Queue Entry Size 00:09:22.551 Max: 16 00:09:22.551 Min: 16 00:09:22.551 Number of Namespaces: 256 00:09:22.551 Compare Command: Supported 00:09:22.551 Write Uncorrectable Command: Not Supported 00:09:22.551 Dataset Management Command: Supported 00:09:22.551 Write Zeroes Command: Supported 00:09:22.551 Set Features Save Field: Supported 00:09:22.551 Reservations: Not Supported 00:09:22.551 Timestamp: Supported 00:09:22.551 Copy: Supported 00:09:22.551 Volatile Write Cache: Present 00:09:22.551 Atomic Write Unit (Normal): 1 00:09:22.551 Atomic Write Unit (PFail): 1 00:09:22.551 Atomic Compare & Write Unit: 1 00:09:22.551 Fused Compare & Write: Not Supported 00:09:22.551 Scatter-Gather List 00:09:22.551 SGL Command Set: Supported 00:09:22.551 SGL Keyed: Not Supported 00:09:22.551 SGL Bit Bucket Descriptor: Not Supported 00:09:22.551 SGL Metadata Pointer: Not Supported 00:09:22.551 Oversized SGL: Not Supported 00:09:22.551 SGL Metadata Address: Not Supported 00:09:22.551 SGL Offset: Not Supported 00:09:22.551 Transport SGL Data Block: Not Supported 00:09:22.551 Replay Protected Memory Block: Not Supported 00:09:22.551 00:09:22.551 Firmware Slot Information 00:09:22.551 ========================= 00:09:22.551 Active slot: 1 00:09:22.551 Slot 1 Firmware Revision: 1.0 00:09:22.551 00:09:22.551 00:09:22.551 Commands Supported and Effects 00:09:22.551 ============================== 00:09:22.551 Admin Commands 00:09:22.551 -------------- 00:09:22.551 Delete I/O Submission Queue (00h): Supported 00:09:22.551 
Create I/O Submission Queue (01h): Supported 00:09:22.551 Get Log Page (02h): Supported 00:09:22.551 Delete I/O Completion Queue (04h): Supported 00:09:22.551 Create I/O Completion Queue (05h): Supported 00:09:22.551 Identify (06h): Supported 00:09:22.551 Abort (08h): Supported 00:09:22.551 Set Features (09h): Supported 00:09:22.551 Get Features (0Ah): Supported 00:09:22.552 Asynchronous Event Request (0Ch): Supported 00:09:22.552 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.552 Directive Send (19h): Supported 00:09:22.552 Directive Receive (1Ah): Supported 00:09:22.552 Virtualization Management (1Ch): Supported 00:09:22.552 Doorbell Buffer Config (7Ch): Supported 00:09:22.552 Format NVM (80h): Supported LBA-Change 00:09:22.552 I/O Commands 00:09:22.552 ------------ 00:09:22.552 Flush (00h): Supported LBA-Change 00:09:22.552 Write (01h): Supported LBA-Change 00:09:22.552 Read (02h): Supported 00:09:22.552 Compare (05h): Supported 00:09:22.552 Write Zeroes (08h): Supported LBA-Change 00:09:22.552 Dataset Management (09h): Supported LBA-Change 00:09:22.552 Unknown (0Ch): Supported 00:09:22.552 Unknown (12h): Supported 00:09:22.552 Copy (19h): Supported LBA-Change 00:09:22.552 Unknown (1Dh): Supported LBA-Change 00:09:22.552 00:09:22.552 Error Log 00:09:22.552 ========= 00:09:22.552 00:09:22.552 Arbitration 00:09:22.552 =========== 00:09:22.552 Arbitration Burst: no limit 00:09:22.552 00:09:22.552 Power Management 00:09:22.552 ================ 00:09:22.552 Number of Power States: 1 00:09:22.552 Current Power State: Power State #0 00:09:22.552 Power State #0: 00:09:22.552 Max Power: 25.00 W 00:09:22.552 Non-Operational State: Operational 00:09:22.552 Entry Latency: 16 microseconds 00:09:22.552 Exit Latency: 4 microseconds 00:09:22.552 Relative Read Throughput: 0 00:09:22.552 Relative Read Latency: 0 00:09:22.552 Relative Write Throughput: 0 00:09:22.552 Relative Write Latency: 0 00:09:22.552 Idle Power[2024-10-15 04:33:12.040980] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 64436 terminated unexpected 00:09:22.552 : Not Reported 00:09:22.552 Active Power: Not Reported 00:09:22.552 Non-Operational Permissive Mode: Not Supported 00:09:22.552 00:09:22.552 Health Information 00:09:22.552 ================== 00:09:22.552 Critical Warnings: 00:09:22.552 Available Spare Space: OK 00:09:22.552 Temperature: OK 00:09:22.552 Device Reliability: OK 00:09:22.552 Read Only: No 00:09:22.552 Volatile Memory Backup: OK 00:09:22.552 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.552 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.552 Available Spare: 0% 00:09:22.552 Available Spare Threshold: 0% 00:09:22.552 Life Percentage Used: 0% 00:09:22.552 Data Units Read: 760 00:09:22.552 Data Units Written: 688 00:09:22.552 Host Read Commands: 36798 00:09:22.552 Host Write Commands: 36584 00:09:22.552 Controller Busy Time: 0 minutes 00:09:22.552 Power Cycles: 0 00:09:22.552 Power On Hours: 0 hours 00:09:22.552 Unsafe Shutdowns: 0 00:09:22.552 Unrecoverable Media Errors: 0 00:09:22.552 Lifetime Error Log Entries: 0 00:09:22.552 Warning Temperature Time: 0 minutes 00:09:22.552 Critical Temperature Time: 0 minutes 00:09:22.552 00:09:22.552 Number of Queues 00:09:22.552 ================ 00:09:22.552 Number of I/O Submission Queues: 64 00:09:22.552 Number of I/O Completion Queues: 64 00:09:22.552 00:09:22.552 ZNS Specific Controller Data 00:09:22.552 ============================ 00:09:22.552 Zone Append Size Limit: 0 00:09:22.552 00:09:22.552 
00:09:22.552 Active Namespaces 00:09:22.552 ================= 00:09:22.552 Namespace ID:1 00:09:22.552 Error Recovery Timeout: Unlimited 00:09:22.552 Command Set Identifier: NVM (00h) 00:09:22.552 Deallocate: Supported 00:09:22.552 Deallocated/Unwritten Error: Supported 00:09:22.552 Deallocated Read Value: All 0x00 00:09:22.552 Deallocate in Write Zeroes: Not Supported 00:09:22.552 Deallocated Guard Field: 0xFFFF 00:09:22.552 Flush: Supported 00:09:22.552 Reservation: Not Supported 00:09:22.552 Metadata Transferred as: Separate Metadata Buffer 00:09:22.552 Namespace Sharing Capabilities: Private 00:09:22.552 Size (in LBAs): 1548666 (5GiB) 00:09:22.552 Capacity (in LBAs): 1548666 (5GiB) 00:09:22.552 Utilization (in LBAs): 1548666 (5GiB) 00:09:22.552 Thin Provisioning: Not Supported 00:09:22.552 Per-NS Atomic Units: No 00:09:22.552 Maximum Single Source Range Length: 128 00:09:22.552 Maximum Copy Length: 128 00:09:22.552 Maximum Source Range Count: 128 00:09:22.552 NGUID/EUI64 Never Reused: No 00:09:22.552 Namespace Write Protected: No 00:09:22.552 Number of LBA Formats: 8 00:09:22.552 Current LBA Format: LBA Format #07 00:09:22.552 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.552 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.552 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.552 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.552 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.552 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.552 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.552 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.552 00:09:22.552 NVM Specific Namespace Data 00:09:22.552 =========================== 00:09:22.552 Logical Block Storage Tag Mask: 0 00:09:22.552 Protection Information Capabilities: 00:09:22.552 16b Guard Protection Information Storage Tag Support: No 00:09:22.552 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.552 Storage Tag Check Read Support: No 00:09:22.552 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.552 ===================================================== 00:09:22.552 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.552 ===================================================== 00:09:22.552 Controller Capabilities/Features 00:09:22.552 ================================ 00:09:22.552 Vendor ID: 1b36 00:09:22.552 Subsystem Vendor ID: 1af4 00:09:22.552 Serial Number: 12341 00:09:22.552 Model Number: QEMU NVMe Ctrl 00:09:22.552 Firmware Version: 8.0.0 00:09:22.552 Recommended Arb Burst: 6 00:09:22.552 IEEE OUI Identifier: 00 54 52 00:09:22.552 Multi-path I/O 00:09:22.552 May have multiple subsystem ports: No 00:09:22.552 May have multiple controllers: No 00:09:22.552 
Associated with SR-IOV VF: No 00:09:22.552 Max Data Transfer Size: 524288 00:09:22.552 Max Number of Namespaces: 256 00:09:22.552 Max Number of I/O Queues: 64 00:09:22.552 NVMe Specification Version (VS): 1.4 00:09:22.552 NVMe Specification Version (Identify): 1.4 00:09:22.552 Maximum Queue Entries: 2048 00:09:22.552 Contiguous Queues Required: Yes 00:09:22.552 Arbitration Mechanisms Supported 00:09:22.552 Weighted Round Robin: Not Supported 00:09:22.552 Vendor Specific: Not Supported 00:09:22.552 Reset Timeout: 7500 ms 00:09:22.552 Doorbell Stride: 4 bytes 00:09:22.552 NVM Subsystem Reset: Not Supported 00:09:22.552 Command Sets Supported 00:09:22.552 NVM Command Set: Supported 00:09:22.552 Boot Partition: Not Supported 00:09:22.552 Memory Page Size Minimum: 4096 bytes 00:09:22.552 Memory Page Size Maximum: 65536 bytes 00:09:22.552 Persistent Memory Region: Not Supported 00:09:22.552 Optional Asynchronous Events Supported 00:09:22.552 Namespace Attribute Notices: Supported 00:09:22.552 Firmware Activation Notices: Not Supported 00:09:22.552 ANA Change Notices: Not Supported 00:09:22.552 PLE Aggregate Log Change Notices: Not Supported 00:09:22.552 LBA Status Info Alert Notices: Not Supported 00:09:22.552 EGE Aggregate Log Change Notices: Not Supported 00:09:22.552 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.552 Zone Descriptor Change Notices: Not Supported 00:09:22.552 Discovery Log Change Notices: Not Supported 00:09:22.552 Controller Attributes 00:09:22.552 128-bit Host Identifier: Not Supported 00:09:22.552 Non-Operational Permissive Mode: Not Supported 00:09:22.553 NVM Sets: Not Supported 00:09:22.553 Read Recovery Levels: Not Supported 00:09:22.553 Endurance Groups: Not Supported 00:09:22.553 Predictable Latency Mode: Not Supported 00:09:22.553 Traffic Based Keep ALive: Not Supported 00:09:22.553 Namespace Granularity: Not Supported 00:09:22.553 SQ Associations: Not Supported 00:09:22.553 UUID List: Not Supported 00:09:22.553 Multi-Domain Subsystem: Not Supported 00:09:22.553 Fixed Capacity Management: Not Supported 00:09:22.553 Variable Capacity Management: Not Supported 00:09:22.553 Delete Endurance Group: Not Supported 00:09:22.553 Delete NVM Set: Not Supported 00:09:22.553 Extended LBA Formats Supported: Supported 00:09:22.553 Flexible Data Placement Supported: Not Supported 00:09:22.553 00:09:22.553 Controller Memory Buffer Support 00:09:22.553 ================================ 00:09:22.553 Supported: No 00:09:22.553 00:09:22.553 Persistent Memory Region Support 00:09:22.553 ================================ 00:09:22.553 Supported: No 00:09:22.553 00:09:22.553 Admin Command Set Attributes 00:09:22.553 ============================ 00:09:22.553 Security Send/Receive: Not Supported 00:09:22.553 Format NVM: Supported 00:09:22.553 Firmware Activate/Download: Not Supported 00:09:22.553 Namespace Management: Supported 00:09:22.553 Device Self-Test: Not Supported 00:09:22.553 Directives: Supported 00:09:22.553 NVMe-MI: Not Supported 00:09:22.553 Virtualization Management: Not Supported 00:09:22.553 Doorbell Buffer Config: Supported 00:09:22.553 Get LBA Status Capability: Not Supported 00:09:22.553 Command & Feature Lockdown Capability: Not Supported 00:09:22.553 Abort Command Limit: 4 00:09:22.553 Async Event Request Limit: 4 00:09:22.553 Number of Firmware Slots: N/A 00:09:22.553 Firmware Slot 1 Read-Only: N/A 00:09:22.553 Firmware Activation Without Reset: N/A 00:09:22.553 Multiple Update Detection Support: N/A 00:09:22.553 Firmware Update Granularity: No Information 
Provided 00:09:22.553 Per-Namespace SMART Log: Yes 00:09:22.553 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.553 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:22.553 Command Effects Log Page: Supported 00:09:22.553 Get Log Page Extended Data: Supported 00:09:22.553 Telemetry Log Pages: Not Supported 00:09:22.553 Persistent Event Log Pages: Not Supported 00:09:22.553 Supported Log Pages Log Page: May Support 00:09:22.553 Commands Supported & Effects Log Page: Not Supported 00:09:22.553 Feature Identifiers & Effects Log Page:May Support 00:09:22.553 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.553 Data Area 4 for Telemetry Log: Not Supported 00:09:22.553 Error Log Page Entries Supported: 1 00:09:22.553 Keep Alive: Not Supported 00:09:22.553 00:09:22.553 NVM Command Set Attributes 00:09:22.553 ========================== 00:09:22.553 Submission Queue Entry Size 00:09:22.553 Max: 64 00:09:22.553 Min: 64 00:09:22.553 Completion Queue Entry Size 00:09:22.553 Max: 16 00:09:22.553 Min: 16 00:09:22.553 Number of Namespaces: 256 00:09:22.553 Compare Command: Supported 00:09:22.553 Write Uncorrectable Command: Not Supported 00:09:22.553 Dataset Management Command: Supported 00:09:22.553 Write Zeroes Command: Supported 00:09:22.553 Set Features Save Field: Supported 00:09:22.553 Reservations: Not Supported 00:09:22.553 Timestamp: Supported 00:09:22.553 Copy: Supported 00:09:22.553 Volatile Write Cache: Present 00:09:22.553 Atomic Write Unit (Normal): 1 00:09:22.553 Atomic Write Unit (PFail): 1 00:09:22.553 Atomic Compare & Write Unit: 1 00:09:22.553 Fused Compare & Write: Not Supported 00:09:22.553 Scatter-Gather List 00:09:22.553 SGL Command Set: Supported 00:09:22.553 SGL Keyed: Not Supported 00:09:22.553 SGL Bit Bucket Descriptor: Not Supported 00:09:22.553 SGL Metadata Pointer: Not Supported 00:09:22.553 Oversized SGL: Not Supported 00:09:22.553 SGL Metadata Address: Not Supported 00:09:22.553 SGL Offset: Not Supported 00:09:22.553 Transport SGL Data Block: Not Supported 00:09:22.553 Replay Protected Memory Block: Not Supported 00:09:22.553 00:09:22.553 Firmware Slot Information 00:09:22.553 ========================= 00:09:22.553 Active slot: 1 00:09:22.553 Slot 1 Firmware Revision: 1.0 00:09:22.553 00:09:22.553 00:09:22.553 Commands Supported and Effects 00:09:22.553 ============================== 00:09:22.553 Admin Commands 00:09:22.553 -------------- 00:09:22.553 Delete I/O Submission Queue (00h): Supported 00:09:22.553 Create I/O Submission Queue (01h): Supported 00:09:22.553 Get Log Page (02h): Supported 00:09:22.553 Delete I/O Completion Queue (04h): Supported 00:09:22.553 Create I/O Completion Queue (05h): Supported 00:09:22.553 Identify (06h): Supported 00:09:22.553 Abort (08h): Supported 00:09:22.553 Set Features (09h): Supported 00:09:22.553 Get Features (0Ah): Supported 00:09:22.553 Asynchronous Event Request (0Ch): Supported 00:09:22.553 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.553 Directive Send (19h): Supported 00:09:22.553 Directive Receive (1Ah): Supported 00:09:22.553 Virtualization Management (1Ch): Supported 00:09:22.553 Doorbell Buffer Config (7Ch): Supported 00:09:22.553 Format NVM (80h): Supported LBA-Change 00:09:22.553 I/O Commands 00:09:22.553 ------------ 00:09:22.553 Flush (00h): Supported LBA-Change 00:09:22.553 Write (01h): Supported LBA-Change 00:09:22.553 Read (02h): Supported 00:09:22.553 Compare (05h): Supported 00:09:22.553 Write Zeroes (08h): Supported LBA-Change 00:09:22.553 Dataset Management (09h): 
Supported LBA-Change 00:09:22.553 Unknown (0Ch): Supported 00:09:22.553 Unknown (12h): Supported 00:09:22.553 Copy (19h): Supported LBA-Change 00:09:22.553 Unknown (1Dh): Supported LBA-Change 00:09:22.553 00:09:22.553 Error Log 00:09:22.553 ========= 00:09:22.553 00:09:22.553 Arbitration 00:09:22.553 =========== 00:09:22.553 Arbitration Burst: no limit 00:09:22.553 00:09:22.553 Power Management 00:09:22.553 ================ 00:09:22.553 Number of Power States: 1 00:09:22.553 Current Power State: Power State #0 00:09:22.553 Power State #0: 00:09:22.553 Max Power: 25.00 W 00:09:22.553 Non-Operational State: Operational 00:09:22.553 Entry Latency: 16 microseconds 00:09:22.553 Exit Latency: 4 microseconds 00:09:22.553 Relative Read Throughput: 0 00:09:22.553 Relative Read Latency: 0 00:09:22.553 Relative Write Throughput: 0 00:09:22.553 Relative Write Latency: 0 00:09:22.553 Idle Power: Not Reported 00:09:22.553 Active Power: Not Reported 00:09:22.553 Non-Operational Permissive Mode: Not Supported 00:09:22.553 00:09:22.553 Health Information 00:09:22.553 ================== 00:09:22.553 Critical Warnings: 00:09:22.553 Available Spare Space: OK 00:09:22.553 Temperature: [2024-10-15 04:33:12.042054] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 64436 terminated unexpected 00:09:22.553 OK 00:09:22.553 Device Reliability: OK 00:09:22.553 Read Only: No 00:09:22.553 Volatile Memory Backup: OK 00:09:22.553 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.553 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.553 Available Spare: 0% 00:09:22.553 Available Spare Threshold: 0% 00:09:22.553 Life Percentage Used: 0% 00:09:22.553 Data Units Read: 1154 00:09:22.553 Data Units Written: 1021 00:09:22.553 Host Read Commands: 54443 00:09:22.553 Host Write Commands: 53228 00:09:22.553 Controller Busy Time: 0 minutes 00:09:22.553 Power Cycles: 0 00:09:22.553 Power On Hours: 0 hours 00:09:22.553 Unsafe Shutdowns: 0 00:09:22.553 Unrecoverable Media Errors: 0 00:09:22.553 Lifetime Error Log Entries: 0 00:09:22.553 Warning Temperature Time: 0 minutes 00:09:22.553 Critical Temperature Time: 0 minutes 00:09:22.553 00:09:22.553 Number of Queues 00:09:22.553 ================ 00:09:22.553 Number of I/O Submission Queues: 64 00:09:22.553 Number of I/O Completion Queues: 64 00:09:22.553 00:09:22.553 ZNS Specific Controller Data 00:09:22.553 ============================ 00:09:22.553 Zone Append Size Limit: 0 00:09:22.553 00:09:22.553 00:09:22.553 Active Namespaces 00:09:22.553 ================= 00:09:22.553 Namespace ID:1 00:09:22.553 Error Recovery Timeout: Unlimited 00:09:22.553 Command Set Identifier: NVM (00h) 00:09:22.553 Deallocate: Supported 00:09:22.553 Deallocated/Unwritten Error: Supported 00:09:22.553 Deallocated Read Value: All 0x00 00:09:22.553 Deallocate in Write Zeroes: Not Supported 00:09:22.553 Deallocated Guard Field: 0xFFFF 00:09:22.553 Flush: Supported 00:09:22.554 Reservation: Not Supported 00:09:22.554 Namespace Sharing Capabilities: Private 00:09:22.554 Size (in LBAs): 1310720 (5GiB) 00:09:22.554 Capacity (in LBAs): 1310720 (5GiB) 00:09:22.554 Utilization (in LBAs): 1310720 (5GiB) 00:09:22.554 Thin Provisioning: Not Supported 00:09:22.554 Per-NS Atomic Units: No 00:09:22.554 Maximum Single Source Range Length: 128 00:09:22.554 Maximum Copy Length: 128 00:09:22.554 Maximum Source Range Count: 128 00:09:22.554 NGUID/EUI64 Never Reused: No 00:09:22.554 Namespace Write Protected: No 00:09:22.554 Number of LBA Formats: 8 00:09:22.554 Current LBA Format: LBA 
Format #04 00:09:22.554 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.554 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.554 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.554 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.554 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.554 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.554 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.554 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.554 00:09:22.554 NVM Specific Namespace Data 00:09:22.554 =========================== 00:09:22.554 Logical Block Storage Tag Mask: 0 00:09:22.554 Protection Information Capabilities: 00:09:22.554 16b Guard Protection Information Storage Tag Support: No 00:09:22.554 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.554 Storage Tag Check Read Support: No 00:09:22.554 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.554 ===================================================== 00:09:22.554 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.554 ===================================================== 00:09:22.554 Controller Capabilities/Features 00:09:22.554 ================================ 00:09:22.554 Vendor ID: 1b36 00:09:22.554 Subsystem Vendor ID: 1af4 00:09:22.554 Serial Number: 12343 00:09:22.554 Model Number: QEMU NVMe Ctrl 00:09:22.554 Firmware Version: 8.0.0 00:09:22.554 Recommended Arb Burst: 6 00:09:22.554 IEEE OUI Identifier: 00 54 52 00:09:22.554 Multi-path I/O 00:09:22.554 May have multiple subsystem ports: No 00:09:22.554 May have multiple controllers: Yes 00:09:22.554 Associated with SR-IOV VF: No 00:09:22.554 Max Data Transfer Size: 524288 00:09:22.554 Max Number of Namespaces: 256 00:09:22.554 Max Number of I/O Queues: 64 00:09:22.554 NVMe Specification Version (VS): 1.4 00:09:22.554 NVMe Specification Version (Identify): 1.4 00:09:22.554 Maximum Queue Entries: 2048 00:09:22.554 Contiguous Queues Required: Yes 00:09:22.554 Arbitration Mechanisms Supported 00:09:22.554 Weighted Round Robin: Not Supported 00:09:22.554 Vendor Specific: Not Supported 00:09:22.554 Reset Timeout: 7500 ms 00:09:22.554 Doorbell Stride: 4 bytes 00:09:22.554 NVM Subsystem Reset: Not Supported 00:09:22.554 Command Sets Supported 00:09:22.554 NVM Command Set: Supported 00:09:22.554 Boot Partition: Not Supported 00:09:22.554 Memory Page Size Minimum: 4096 bytes 00:09:22.554 Memory Page Size Maximum: 65536 bytes 00:09:22.554 Persistent Memory Region: Not Supported 00:09:22.554 Optional Asynchronous Events Supported 00:09:22.554 Namespace Attribute Notices: Supported 00:09:22.554 Firmware Activation Notices: Not Supported 00:09:22.554 ANA Change Notices: Not Supported 00:09:22.554 PLE Aggregate Log Change 
Notices: Not Supported 00:09:22.554 LBA Status Info Alert Notices: Not Supported 00:09:22.554 EGE Aggregate Log Change Notices: Not Supported 00:09:22.554 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.554 Zone Descriptor Change Notices: Not Supported 00:09:22.554 Discovery Log Change Notices: Not Supported 00:09:22.554 Controller Attributes 00:09:22.554 128-bit Host Identifier: Not Supported 00:09:22.554 Non-Operational Permissive Mode: Not Supported 00:09:22.554 NVM Sets: Not Supported 00:09:22.554 Read Recovery Levels: Not Supported 00:09:22.554 Endurance Groups: Supported 00:09:22.554 Predictable Latency Mode: Not Supported 00:09:22.554 Traffic Based Keep ALive: Not Supported 00:09:22.554 Namespace Granularity: Not Supported 00:09:22.554 SQ Associations: Not Supported 00:09:22.554 UUID List: Not Supported 00:09:22.554 Multi-Domain Subsystem: Not Supported 00:09:22.554 Fixed Capacity Management: Not Supported 00:09:22.554 Variable Capacity Management: Not Supported 00:09:22.554 Delete Endurance Group: Not Supported 00:09:22.554 Delete NVM Set: Not Supported 00:09:22.554 Extended LBA Formats Supported: Supported 00:09:22.554 Flexible Data Placement Supported: Supported 00:09:22.554 00:09:22.554 Controller Memory Buffer Support 00:09:22.554 ================================ 00:09:22.554 Supported: No 00:09:22.554 00:09:22.554 Persistent Memory Region Support 00:09:22.554 ================================ 00:09:22.554 Supported: No 00:09:22.554 00:09:22.554 Admin Command Set Attributes 00:09:22.554 ============================ 00:09:22.554 Security Send/Receive: Not Supported 00:09:22.554 Format NVM: Supported 00:09:22.554 Firmware Activate/Download: Not Supported 00:09:22.554 Namespace Management: Supported 00:09:22.554 Device Self-Test: Not Supported 00:09:22.554 Directives: Supported 00:09:22.554 NVMe-MI: Not Supported 00:09:22.554 Virtualization Management: Not Supported 00:09:22.554 Doorbell Buffer Config: Supported 00:09:22.554 Get LBA Status Capability: Not Supported 00:09:22.554 Command & Feature Lockdown Capability: Not Supported 00:09:22.554 Abort Command Limit: 4 00:09:22.554 Async Event Request Limit: 4 00:09:22.554 Number of Firmware Slots: N/A 00:09:22.554 Firmware Slot 1 Read-Only: N/A 00:09:22.554 Firmware Activation Without Reset: N/A 00:09:22.554 Multiple Update Detection Support: N/A 00:09:22.554 Firmware Update Granularity: No Information Provided 00:09:22.554 Per-Namespace SMART Log: Yes 00:09:22.554 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.554 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:22.554 Command Effects Log Page: Supported 00:09:22.554 Get Log Page Extended Data: Supported 00:09:22.554 Telemetry Log Pages: Not Supported 00:09:22.554 Persistent Event Log Pages: Not Supported 00:09:22.554 Supported Log Pages Log Page: May Support 00:09:22.554 Commands Supported & Effects Log Page: Not Supported 00:09:22.554 Feature Identifiers & Effects Log Page:May Support 00:09:22.554 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.554 Data Area 4 for Telemetry Log: Not Supported 00:09:22.554 Error Log Page Entries Supported: 1 00:09:22.554 Keep Alive: Not Supported 00:09:22.554 00:09:22.554 NVM Command Set Attributes 00:09:22.554 ========================== 00:09:22.554 Submission Queue Entry Size 00:09:22.554 Max: 64 00:09:22.554 Min: 64 00:09:22.554 Completion Queue Entry Size 00:09:22.554 Max: 16 00:09:22.554 Min: 16 00:09:22.554 Number of Namespaces: 256 00:09:22.554 Compare Command: Supported 00:09:22.554 Write 
Uncorrectable Command: Not Supported 00:09:22.554 Dataset Management Command: Supported 00:09:22.554 Write Zeroes Command: Supported 00:09:22.554 Set Features Save Field: Supported 00:09:22.554 Reservations: Not Supported 00:09:22.554 Timestamp: Supported 00:09:22.554 Copy: Supported 00:09:22.554 Volatile Write Cache: Present 00:09:22.554 Atomic Write Unit (Normal): 1 00:09:22.554 Atomic Write Unit (PFail): 1 00:09:22.554 Atomic Compare & Write Unit: 1 00:09:22.555 Fused Compare & Write: Not Supported 00:09:22.555 Scatter-Gather List 00:09:22.555 SGL Command Set: Supported 00:09:22.555 SGL Keyed: Not Supported 00:09:22.555 SGL Bit Bucket Descriptor: Not Supported 00:09:22.555 SGL Metadata Pointer: Not Supported 00:09:22.555 Oversized SGL: Not Supported 00:09:22.555 SGL Metadata Address: Not Supported 00:09:22.555 SGL Offset: Not Supported 00:09:22.555 Transport SGL Data Block: Not Supported 00:09:22.555 Replay Protected Memory Block: Not Supported 00:09:22.555 00:09:22.555 Firmware Slot Information 00:09:22.555 ========================= 00:09:22.555 Active slot: 1 00:09:22.555 Slot 1 Firmware Revision: 1.0 00:09:22.555 00:09:22.555 00:09:22.555 Commands Supported and Effects 00:09:22.555 ============================== 00:09:22.555 Admin Commands 00:09:22.555 -------------- 00:09:22.555 Delete I/O Submission Queue (00h): Supported 00:09:22.555 Create I/O Submission Queue (01h): Supported 00:09:22.555 Get Log Page (02h): Supported 00:09:22.555 Delete I/O Completion Queue (04h): Supported 00:09:22.555 Create I/O Completion Queue (05h): Supported 00:09:22.555 Identify (06h): Supported 00:09:22.555 Abort (08h): Supported 00:09:22.555 Set Features (09h): Supported 00:09:22.555 Get Features (0Ah): Supported 00:09:22.555 Asynchronous Event Request (0Ch): Supported 00:09:22.555 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.555 Directive Send (19h): Supported 00:09:22.555 Directive Receive (1Ah): Supported 00:09:22.555 Virtualization Management (1Ch): Supported 00:09:22.555 Doorbell Buffer Config (7Ch): Supported 00:09:22.555 Format NVM (80h): Supported LBA-Change 00:09:22.555 I/O Commands 00:09:22.555 ------------ 00:09:22.555 Flush (00h): Supported LBA-Change 00:09:22.555 Write (01h): Supported LBA-Change 00:09:22.555 Read (02h): Supported 00:09:22.555 Compare (05h): Supported 00:09:22.555 Write Zeroes (08h): Supported LBA-Change 00:09:22.555 Dataset Management (09h): Supported LBA-Change 00:09:22.555 Unknown (0Ch): Supported 00:09:22.555 Unknown (12h): Supported 00:09:22.555 Copy (19h): Supported LBA-Change 00:09:22.555 Unknown (1Dh): Supported LBA-Change 00:09:22.555 00:09:22.555 Error Log 00:09:22.555 ========= 00:09:22.555 00:09:22.555 Arbitration 00:09:22.555 =========== 00:09:22.555 Arbitration Burst: no limit 00:09:22.555 00:09:22.555 Power Management 00:09:22.555 ================ 00:09:22.555 Number of Power States: 1 00:09:22.555 Current Power State: Power State #0 00:09:22.555 Power State #0: 00:09:22.555 Max Power: 25.00 W 00:09:22.555 Non-Operational State: Operational 00:09:22.555 Entry Latency: 16 microseconds 00:09:22.555 Exit Latency: 4 microseconds 00:09:22.555 Relative Read Throughput: 0 00:09:22.555 Relative Read Latency: 0 00:09:22.555 Relative Write Throughput: 0 00:09:22.555 Relative Write Latency: 0 00:09:22.555 Idle Power: Not Reported 00:09:22.555 Active Power: Not Reported 00:09:22.555 Non-Operational Permissive Mode: Not Supported 00:09:22.555 00:09:22.555 Health Information 00:09:22.555 ================== 00:09:22.555 Critical Warnings: 00:09:22.555 
Available Spare Space: OK 00:09:22.555 Temperature: OK 00:09:22.555 Device Reliability: OK 00:09:22.555 Read Only: No 00:09:22.555 Volatile Memory Backup: OK 00:09:22.555 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.555 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.555 Available Spare: 0% 00:09:22.555 Available Spare Threshold: 0% 00:09:22.555 Life Percentage Used: 0% 00:09:22.555 Data Units Read: 854 00:09:22.555 Data Units Written: 783 00:09:22.555 Host Read Commands: 37951 00:09:22.555 Host Write Commands: 37374 00:09:22.555 Controller Busy Time: 0 minutes 00:09:22.555 Power Cycles: 0 00:09:22.555 Power On Hours: 0 hours 00:09:22.555 Unsafe Shutdowns: 0 00:09:22.555 Unrecoverable Media Errors: 0 00:09:22.555 Lifetime Error Log Entries: 0 00:09:22.555 Warning Temperature Time: 0 minutes 00:09:22.555 Critical Temperature Time: 0 minutes 00:09:22.555 00:09:22.555 Number of Queues 00:09:22.555 ================ 00:09:22.555 Number of I/O Submission Queues: 64 00:09:22.555 Number of I/O Completion Queues: 64 00:09:22.555 00:09:22.555 ZNS Specific Controller Data 00:09:22.555 ============================ 00:09:22.555 Zone Append Size Limit: 0 00:09:22.555 00:09:22.555 00:09:22.555 Active Namespaces 00:09:22.555 ================= 00:09:22.555 Namespace ID:1 00:09:22.555 Error Recovery Timeout: Unlimited 00:09:22.555 Command Set Identifier: NVM (00h) 00:09:22.555 Deallocate: Supported 00:09:22.555 Deallocated/Unwritten Error: Supported 00:09:22.555 Deallocated Read Value: All 0x00 00:09:22.555 Deallocate in Write Zeroes: Not Supported 00:09:22.555 Deallocated Guard Field: 0xFFFF 00:09:22.555 Flush: Supported 00:09:22.555 Reservation: Not Supported 00:09:22.555 Namespace Sharing Capabilities: Multiple Controllers 00:09:22.555 Size (in LBAs): 262144 (1GiB) 00:09:22.555 Capacity (in LBAs): 262144 (1GiB) 00:09:22.555 Utilization (in LBAs): 262144 (1GiB) 00:09:22.555 Thin Provisioning: Not Supported 00:09:22.555 Per-NS Atomic Units: No 00:09:22.555 Maximum Single Source Range Length: 128 00:09:22.555 Maximum Copy Length: 128 00:09:22.555 Maximum Source Range Count: 128 00:09:22.555 NGUID/EUI64 Never Reused: No 00:09:22.555 Namespace Write Protected: No 00:09:22.555 Endurance group ID: 1 00:09:22.555 Number of LBA Formats: 8 00:09:22.555 Current LBA Format: LBA Format #04 00:09:22.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.555 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.555 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.555 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.555 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.555 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.555 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.555 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.555 00:09:22.555 Get Feature FDP: 00:09:22.555 ================ 00:09:22.555 Enabled: Yes 00:09:22.555 FDP configuration index: 0 00:09:22.555 00:09:22.555 FDP configurations log page 00:09:22.555 =========================== 00:09:22.555 Number of FDP configurations: 1 00:09:22.555 Version: 0 00:09:22.555 Size: 112 00:09:22.555 FDP Configuration Descriptor: 0 00:09:22.555 Descriptor Size: 96 00:09:22.555 Reclaim Group Identifier format: 2 00:09:22.555 FDP Volatile Write Cache: Not Present 00:09:22.555 FDP Configuration: Valid 00:09:22.555 Vendor Specific Size: 0 00:09:22.555 Number of Reclaim Groups: 2 00:09:22.555 Number of Recalim Unit Handles: 8 00:09:22.555 Max Placement Identifiers: 128 00:09:22.555 Number of 
Namespaces Supported: 256 00:09:22.555 Reclaim Unit Nominal Size: 6000000 bytes 00:09:22.555 Estimated Reclaim Unit Time Limit: Not Reported 00:09:22.555 RUH Desc #000: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #001: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #002: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #003: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #004: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #005: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #006: RUH Type: Initially Isolated 00:09:22.555 RUH Desc #007: RUH Type: Initially Isolated 00:09:22.555 00:09:22.555 FDP reclaim unit handle usage log page 00:09:22.555 ====================================== 00:09:22.555 Number of Reclaim Unit Handles: 8 00:09:22.555 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:22.555 RUH Usage Desc #001: RUH Attributes: Unused 00:09:22.555 RUH Usage Desc #002: RUH Attributes: Unused 00:09:22.555 RUH Usage Desc #003: RUH Attributes: Unused 00:09:22.555 RUH Usage Desc #004: RUH Attributes: Unused 00:09:22.555 RUH Usage Desc #005: RUH Attributes: Unused 00:09:22.555 RUH Usage Desc #006: RUH Attributes: Unused 00:09:22.555 RUH Usage Desc #007: RUH Attributes: Unused 00:09:22.555 00:09:22.555 FDP statistics log page 00:09:22.555 ======================= 00:09:22.555 Host bytes with metadata written: 504864768 [2024-10-15 04:33:12.044268] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 64436 terminated unexpected 00:09:22.555 Media bytes with metadata written: 504922112 00:09:22.555 Media bytes erased: 0 00:09:22.555 00:09:22.555 FDP events log page 00:09:22.555 =================== 00:09:22.555 Number of FDP events: 0 00:09:22.555 00:09:22.555 NVM Specific Namespace Data 00:09:22.555 =========================== 00:09:22.556 Logical Block Storage Tag Mask: 0 00:09:22.556 Protection Information Capabilities: 00:09:22.556 16b Guard Protection Information Storage Tag Support: No 00:09:22.556 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.556 Storage Tag Check Read Support: No 00:09:22.556 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.556 ===================================================== 00:09:22.556 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.556 ===================================================== 00:09:22.556 Controller Capabilities/Features 00:09:22.556 ================================ 00:09:22.556 Vendor ID: 1b36 00:09:22.556 Subsystem Vendor ID: 1af4 00:09:22.556 Serial Number: 12342 00:09:22.556 Model Number: QEMU NVMe Ctrl 00:09:22.556 Firmware Version: 8.0.0 00:09:22.556 Recommended Arb Burst: 6 00:09:22.556 IEEE OUI Identifier: 00 54 52 00:09:22.556 Multi-path I/O 00:09:22.556
May have multiple subsystem ports: No 00:09:22.556 May have multiple controllers: No 00:09:22.556 Associated with SR-IOV VF: No 00:09:22.556 Max Data Transfer Size: 524288 00:09:22.556 Max Number of Namespaces: 256 00:09:22.556 Max Number of I/O Queues: 64 00:09:22.556 NVMe Specification Version (VS): 1.4 00:09:22.556 NVMe Specification Version (Identify): 1.4 00:09:22.556 Maximum Queue Entries: 2048 00:09:22.556 Contiguous Queues Required: Yes 00:09:22.556 Arbitration Mechanisms Supported 00:09:22.556 Weighted Round Robin: Not Supported 00:09:22.556 Vendor Specific: Not Supported 00:09:22.556 Reset Timeout: 7500 ms 00:09:22.556 Doorbell Stride: 4 bytes 00:09:22.556 NVM Subsystem Reset: Not Supported 00:09:22.556 Command Sets Supported 00:09:22.556 NVM Command Set: Supported 00:09:22.556 Boot Partition: Not Supported 00:09:22.556 Memory Page Size Minimum: 4096 bytes 00:09:22.556 Memory Page Size Maximum: 65536 bytes 00:09:22.556 Persistent Memory Region: Not Supported 00:09:22.556 Optional Asynchronous Events Supported 00:09:22.556 Namespace Attribute Notices: Supported 00:09:22.556 Firmware Activation Notices: Not Supported 00:09:22.556 ANA Change Notices: Not Supported 00:09:22.556 PLE Aggregate Log Change Notices: Not Supported 00:09:22.556 LBA Status Info Alert Notices: Not Supported 00:09:22.556 EGE Aggregate Log Change Notices: Not Supported 00:09:22.556 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.556 Zone Descriptor Change Notices: Not Supported 00:09:22.556 Discovery Log Change Notices: Not Supported 00:09:22.556 Controller Attributes 00:09:22.556 128-bit Host Identifier: Not Supported 00:09:22.556 Non-Operational Permissive Mode: Not Supported 00:09:22.556 NVM Sets: Not Supported 00:09:22.556 Read Recovery Levels: Not Supported 00:09:22.556 Endurance Groups: Not Supported 00:09:22.556 Predictable Latency Mode: Not Supported 00:09:22.556 Traffic Based Keep Alive: Not Supported 00:09:22.556 Namespace Granularity: Not Supported 00:09:22.556 SQ Associations: Not Supported 00:09:22.556 UUID List: Not Supported 00:09:22.556 Multi-Domain Subsystem: Not Supported 00:09:22.556 Fixed Capacity Management: Not Supported 00:09:22.556 Variable Capacity Management: Not Supported 00:09:22.556 Delete Endurance Group: Not Supported 00:09:22.556 Delete NVM Set: Not Supported 00:09:22.556 Extended LBA Formats Supported: Supported 00:09:22.556 Flexible Data Placement Supported: Not Supported 00:09:22.556 00:09:22.556 Controller Memory Buffer Support 00:09:22.556 ================================ 00:09:22.556 Supported: No 00:09:22.556 00:09:22.556 Persistent Memory Region Support 00:09:22.556 ================================ 00:09:22.556 Supported: No 00:09:22.556 00:09:22.556 Admin Command Set Attributes 00:09:22.556 ============================ 00:09:22.556 Security Send/Receive: Not Supported 00:09:22.556 Format NVM: Supported 00:09:22.556 Firmware Activate/Download: Not Supported 00:09:22.556 Namespace Management: Supported 00:09:22.556 Device Self-Test: Not Supported 00:09:22.556 Directives: Supported 00:09:22.556 NVMe-MI: Not Supported 00:09:22.556 Virtualization Management: Not Supported 00:09:22.556 Doorbell Buffer Config: Supported 00:09:22.556 Get LBA Status Capability: Not Supported 00:09:22.556 Command & Feature Lockdown Capability: Not Supported 00:09:22.556 Abort Command Limit: 4 00:09:22.556 Async Event Request Limit: 4 00:09:22.556 Number of Firmware Slots: N/A 00:09:22.556 Firmware Slot 1 Read-Only: N/A 00:09:22.556 Firmware Activation Without Reset: N/A 00:09:22.556
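To replay any one of these dumps by hand, the same binary the harness invokes can be pointed at a single controller; a minimal sketch, assuming the spdk_nvme_identify path and BDF shown elsewhere in this log, with the grep target chosen purely as an example field:

  # Identify controller 0000:00:12.0 and pull a single capability line.
  # Path and BDF come from this run's trace; the field name is illustrative.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 | grep -m1 'Max Data Transfer Size:'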
Multiple Update Detection Support: N/A 00:09:22.556 Firmware Update Granularity: No Information Provided 00:09:22.556 Per-Namespace SMART Log: Yes 00:09:22.556 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.556 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:22.556 Command Effects Log Page: Supported 00:09:22.556 Get Log Page Extended Data: Supported 00:09:22.556 Telemetry Log Pages: Not Supported 00:09:22.556 Persistent Event Log Pages: Not Supported 00:09:22.556 Supported Log Pages Log Page: May Support 00:09:22.556 Commands Supported & Effects Log Page: Not Supported 00:09:22.556 Feature Identifiers & Effects Log Page: May Support 00:09:22.556 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.556 Data Area 4 for Telemetry Log: Not Supported 00:09:22.556 Error Log Page Entries Supported: 1 00:09:22.556 Keep Alive: Not Supported 00:09:22.556 00:09:22.556 NVM Command Set Attributes 00:09:22.556 ========================== 00:09:22.556 Submission Queue Entry Size 00:09:22.556 Max: 64 00:09:22.556 Min: 64 00:09:22.556 Completion Queue Entry Size 00:09:22.556 Max: 16 00:09:22.556 Min: 16 00:09:22.556 Number of Namespaces: 256 00:09:22.556 Compare Command: Supported 00:09:22.556 Write Uncorrectable Command: Not Supported 00:09:22.556 Dataset Management Command: Supported 00:09:22.556 Write Zeroes Command: Supported 00:09:22.556 Set Features Save Field: Supported 00:09:22.556 Reservations: Not Supported 00:09:22.556 Timestamp: Supported 00:09:22.556 Copy: Supported 00:09:22.556 Volatile Write Cache: Present 00:09:22.556 Atomic Write Unit (Normal): 1 00:09:22.556 Atomic Write Unit (PFail): 1 00:09:22.556 Atomic Compare & Write Unit: 1 00:09:22.556 Fused Compare & Write: Not Supported 00:09:22.556 Scatter-Gather List 00:09:22.556 SGL Command Set: Supported 00:09:22.556 SGL Keyed: Not Supported 00:09:22.556 SGL Bit Bucket Descriptor: Not Supported 00:09:22.556 SGL Metadata Pointer: Not Supported 00:09:22.556 Oversized SGL: Not Supported 00:09:22.556 SGL Metadata Address: Not Supported 00:09:22.556 SGL Offset: Not Supported 00:09:22.556 Transport SGL Data Block: Not Supported 00:09:22.556 Replay Protected Memory Block: Not Supported 00:09:22.556 00:09:22.556 Firmware Slot Information 00:09:22.556 ========================= 00:09:22.556 Active slot: 1 00:09:22.556 Slot 1 Firmware Revision: 1.0 00:09:22.556 00:09:22.556 00:09:22.556 Commands Supported and Effects 00:09:22.556 ============================== 00:09:22.556 Admin Commands 00:09:22.556 -------------- 00:09:22.556 Delete I/O Submission Queue (00h): Supported 00:09:22.556 Create I/O Submission Queue (01h): Supported 00:09:22.556 Get Log Page (02h): Supported 00:09:22.556 Delete I/O Completion Queue (04h): Supported 00:09:22.556 Create I/O Completion Queue (05h): Supported 00:09:22.556 Identify (06h): Supported 00:09:22.556 Abort (08h): Supported 00:09:22.556 Set Features (09h): Supported 00:09:22.557 Get Features (0Ah): Supported 00:09:22.557 Asynchronous Event Request (0Ch): Supported 00:09:22.557 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.557 Directive Send (19h): Supported 00:09:22.557 Directive Receive (1Ah): Supported 00:09:22.557 Virtualization Management (1Ch): Supported 00:09:22.557 Doorbell Buffer Config (7Ch): Supported 00:09:22.557 Format NVM (80h): Supported LBA-Change 00:09:22.557 I/O Commands 00:09:22.557 ------------ 00:09:22.557 Flush (00h): Supported LBA-Change 00:09:22.557 Write (01h): Supported LBA-Change 00:09:22.557 Read (02h): Supported 00:09:22.557 Compare (05h): Supported
00:09:22.557 Write Zeroes (08h): Supported LBA-Change 00:09:22.557 Dataset Management (09h): Supported LBA-Change 00:09:22.557 Unknown (0Ch): Supported 00:09:22.557 Unknown (12h): Supported 00:09:22.557 Copy (19h): Supported LBA-Change 00:09:22.557 Unknown (1Dh): Supported LBA-Change 00:09:22.557 00:09:22.557 Error Log 00:09:22.557 ========= 00:09:22.557 00:09:22.557 Arbitration 00:09:22.557 =========== 00:09:22.557 Arbitration Burst: no limit 00:09:22.557 00:09:22.557 Power Management 00:09:22.557 ================ 00:09:22.557 Number of Power States: 1 00:09:22.557 Current Power State: Power State #0 00:09:22.557 Power State #0: 00:09:22.557 Max Power: 25.00 W 00:09:22.557 Non-Operational State: Operational 00:09:22.557 Entry Latency: 16 microseconds 00:09:22.557 Exit Latency: 4 microseconds 00:09:22.557 Relative Read Throughput: 0 00:09:22.557 Relative Read Latency: 0 00:09:22.557 Relative Write Throughput: 0 00:09:22.557 Relative Write Latency: 0 00:09:22.557 Idle Power: Not Reported 00:09:22.557 Active Power: Not Reported 00:09:22.557 Non-Operational Permissive Mode: Not Supported 00:09:22.557 00:09:22.557 Health Information 00:09:22.557 ================== 00:09:22.557 Critical Warnings: 00:09:22.557 Available Spare Space: OK 00:09:22.557 Temperature: OK 00:09:22.557 Device Reliability: OK 00:09:22.557 Read Only: No 00:09:22.557 Volatile Memory Backup: OK 00:09:22.557 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.557 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.557 Available Spare: 0% 00:09:22.557 Available Spare Threshold: 0% 00:09:22.557 Life Percentage Used: 0% 00:09:22.557 Data Units Read: 2405 00:09:22.557 Data Units Written: 2192 00:09:22.557 Host Read Commands: 112386 00:09:22.557 Host Write Commands: 110656 00:09:22.557 Controller Busy Time: 0 minutes 00:09:22.557 Power Cycles: 0 00:09:22.557 Power On Hours: 0 hours 00:09:22.557 Unsafe Shutdowns: 0 00:09:22.557 Unrecoverable Media Errors: 0 00:09:22.557 Lifetime Error Log Entries: 0 00:09:22.557 Warning Temperature Time: 0 minutes 00:09:22.557 Critical Temperature Time: 0 minutes 00:09:22.557 00:09:22.557 Number of Queues 00:09:22.557 ================ 00:09:22.557 Number of I/O Submission Queues: 64 00:09:22.557 Number of I/O Completion Queues: 64 00:09:22.557 00:09:22.557 ZNS Specific Controller Data 00:09:22.557 ============================ 00:09:22.557 Zone Append Size Limit: 0 00:09:22.557 00:09:22.557 00:09:22.557 Active Namespaces 00:09:22.557 ================= 00:09:22.557 Namespace ID:1 00:09:22.557 Error Recovery Timeout: Unlimited 00:09:22.557 Command Set Identifier: NVM (00h) 00:09:22.557 Deallocate: Supported 00:09:22.557 Deallocated/Unwritten Error: Supported 00:09:22.557 Deallocated Read Value: All 0x00 00:09:22.557 Deallocate in Write Zeroes: Not Supported 00:09:22.557 Deallocated Guard Field: 0xFFFF 00:09:22.557 Flush: Supported 00:09:22.557 Reservation: Not Supported 00:09:22.557 Namespace Sharing Capabilities: Private 00:09:22.557 Size (in LBAs): 1048576 (4GiB) 00:09:22.557 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.557 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.557 Thin Provisioning: Not Supported 00:09:22.557 Per-NS Atomic Units: No 00:09:22.557 Maximum Single Source Range Length: 128 00:09:22.557 Maximum Copy Length: 128 00:09:22.557 Maximum Source Range Count: 128 00:09:22.557 NGUID/EUI64 Never Reused: No 00:09:22.557 Namespace Write Protected: No 00:09:22.557 Number of LBA Formats: 8 00:09:22.557 Current LBA Format: LBA Format #04 00:09:22.557 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:09:22.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.557 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.557 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.557 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.557 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.557 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.557 00:09:22.557 NVM Specific Namespace Data 00:09:22.557 =========================== 00:09:22.557 Logical Block Storage Tag Mask: 0 00:09:22.557 Protection Information Capabilities: 00:09:22.557 16b Guard Protection Information Storage Tag Support: No 00:09:22.557 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.557 Storage Tag Check Read Support: No 00:09:22.557 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Namespace ID:2 00:09:22.557 Error Recovery Timeout: Unlimited 00:09:22.557 Command Set Identifier: NVM (00h) 00:09:22.557 Deallocate: Supported 00:09:22.557 Deallocated/Unwritten Error: Supported 00:09:22.557 Deallocated Read Value: All 0x00 00:09:22.557 Deallocate in Write Zeroes: Not Supported 00:09:22.557 Deallocated Guard Field: 0xFFFF 00:09:22.557 Flush: Supported 00:09:22.557 Reservation: Not Supported 00:09:22.557 Namespace Sharing Capabilities: Private 00:09:22.557 Size (in LBAs): 1048576 (4GiB) 00:09:22.557 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.557 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.557 Thin Provisioning: Not Supported 00:09:22.557 Per-NS Atomic Units: No 00:09:22.557 Maximum Single Source Range Length: 128 00:09:22.557 Maximum Copy Length: 128 00:09:22.557 Maximum Source Range Count: 128 00:09:22.557 NGUID/EUI64 Never Reused: No 00:09:22.557 Namespace Write Protected: No 00:09:22.557 Number of LBA Formats: 8 00:09:22.557 Current LBA Format: LBA Format #04 00:09:22.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.557 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.557 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.557 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.557 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.557 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.557 00:09:22.557 NVM Specific Namespace Data 00:09:22.557 =========================== 00:09:22.557 Logical Block Storage Tag Mask: 0 00:09:22.557 Protection Information Capabilities: 00:09:22.557 16b Guard Protection Information Storage Tag Support: No 00:09:22.557 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:22.557 Storage Tag Check Read Support: No 00:09:22.557 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.557 Namespace ID:3 00:09:22.557 Error Recovery Timeout: Unlimited 00:09:22.557 Command Set Identifier: NVM (00h) 00:09:22.557 Deallocate: Supported 00:09:22.557 Deallocated/Unwritten Error: Supported 00:09:22.557 Deallocated Read Value: All 0x00 00:09:22.557 Deallocate in Write Zeroes: Not Supported 00:09:22.557 Deallocated Guard Field: 0xFFFF 00:09:22.557 Flush: Supported 00:09:22.557 Reservation: Not Supported 00:09:22.557 Namespace Sharing Capabilities: Private 00:09:22.558 Size (in LBAs): 1048576 (4GiB) 00:09:22.817 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.817 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.817 Thin Provisioning: Not Supported 00:09:22.817 Per-NS Atomic Units: No 00:09:22.817 Maximum Single Source Range Length: 128 00:09:22.817 Maximum Copy Length: 128 00:09:22.817 Maximum Source Range Count: 128 00:09:22.817 NGUID/EUI64 Never Reused: No 00:09:22.817 Namespace Write Protected: No 00:09:22.817 Number of LBA Formats: 8 00:09:22.817 Current LBA Format: LBA Format #04 00:09:22.817 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.817 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.817 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.817 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.817 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.817 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.817 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.817 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.817 00:09:22.817 NVM Specific Namespace Data 00:09:22.817 =========================== 00:09:22.817 Logical Block Storage Tag Mask: 0 00:09:22.817 Protection Information Capabilities: 00:09:22.817 16b Guard Protection Information Storage Tag Support: No 00:09:22.817 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.817 Storage Tag Check Read Support: No 00:09:22.817 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.817 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.817 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:23.078 ===================================================== 00:09:23.078 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:23.078 ===================================================== 00:09:23.078 Controller Capabilities/Features 00:09:23.078 ================================ 00:09:23.078 Vendor ID: 1b36 00:09:23.078 Subsystem Vendor ID: 1af4 00:09:23.078 Serial Number: 12340 00:09:23.078 Model Number: QEMU NVMe Ctrl 00:09:23.078 Firmware Version: 8.0.0 00:09:23.078 Recommended Arb Burst: 6 00:09:23.078 IEEE OUI Identifier: 00 54 52 00:09:23.078 Multi-path I/O 00:09:23.078 May have multiple subsystem ports: No 00:09:23.078 May have multiple controllers: No 00:09:23.078 Associated with SR-IOV VF: No 00:09:23.078 Max Data Transfer Size: 524288 00:09:23.078 Max Number of Namespaces: 256 00:09:23.078 Max Number of I/O Queues: 64 00:09:23.078 NVMe Specification Version (VS): 1.4 00:09:23.078 NVMe Specification Version (Identify): 1.4 00:09:23.078 Maximum Queue Entries: 2048 00:09:23.078 Contiguous Queues Required: Yes 00:09:23.078 Arbitration Mechanisms Supported 00:09:23.078 Weighted Round Robin: Not Supported 00:09:23.078 Vendor Specific: Not Supported 00:09:23.078 Reset Timeout: 7500 ms 00:09:23.078 Doorbell Stride: 4 bytes 00:09:23.078 NVM Subsystem Reset: Not Supported 00:09:23.078 Command Sets Supported 00:09:23.078 NVM Command Set: Supported 00:09:23.078 Boot Partition: Not Supported 00:09:23.078 Memory Page Size Minimum: 4096 bytes 00:09:23.078 Memory Page Size Maximum: 65536 bytes 00:09:23.078 Persistent Memory Region: Not Supported 00:09:23.078 Optional Asynchronous Events Supported 00:09:23.078 Namespace Attribute Notices: Supported 00:09:23.078 Firmware Activation Notices: Not Supported 00:09:23.078 ANA Change Notices: Not Supported 00:09:23.078 PLE Aggregate Log Change Notices: Not Supported 00:09:23.078 LBA Status Info Alert Notices: Not Supported 00:09:23.078 EGE Aggregate Log Change Notices: Not Supported 00:09:23.078 Normal NVM Subsystem Shutdown event: Not Supported 00:09:23.078 Zone Descriptor Change Notices: Not Supported 00:09:23.078 Discovery Log Change Notices: Not Supported 00:09:23.078 Controller Attributes 00:09:23.078 128-bit Host Identifier: Not Supported 00:09:23.078 Non-Operational Permissive Mode: Not Supported 00:09:23.078 NVM Sets: Not Supported 00:09:23.078 Read Recovery Levels: Not Supported 00:09:23.078 Endurance Groups: Not Supported 00:09:23.078 Predictable Latency Mode: Not Supported 00:09:23.078 Traffic Based Keep Alive: Not Supported 00:09:23.078 Namespace Granularity: Not Supported 00:09:23.078 SQ Associations: Not Supported 00:09:23.078 UUID List: Not Supported 00:09:23.078 Multi-Domain Subsystem: Not Supported 00:09:23.078 Fixed Capacity Management: Not Supported 00:09:23.078 Variable Capacity Management: Not Supported 00:09:23.078 Delete Endurance Group: Not Supported 00:09:23.078 Delete NVM Set: Not Supported 00:09:23.078 Extended LBA Formats Supported: Supported 00:09:23.078 Flexible Data Placement Supported: Not Supported 00:09:23.078 00:09:23.078 Controller Memory Buffer Support 00:09:23.078 ================================ 00:09:23.078 Supported: No 00:09:23.078 00:09:23.078 Persistent Memory Region Support 00:09:23.078
================================ 00:09:23.078 Supported: No 00:09:23.078 00:09:23.078 Admin Command Set Attributes 00:09:23.078 ============================ 00:09:23.078 Security Send/Receive: Not Supported 00:09:23.078 Format NVM: Supported 00:09:23.078 Firmware Activate/Download: Not Supported 00:09:23.078 Namespace Management: Supported 00:09:23.078 Device Self-Test: Not Supported 00:09:23.078 Directives: Supported 00:09:23.078 NVMe-MI: Not Supported 00:09:23.078 Virtualization Management: Not Supported 00:09:23.078 Doorbell Buffer Config: Supported 00:09:23.078 Get LBA Status Capability: Not Supported 00:09:23.078 Command & Feature Lockdown Capability: Not Supported 00:09:23.078 Abort Command Limit: 4 00:09:23.078 Async Event Request Limit: 4 00:09:23.078 Number of Firmware Slots: N/A 00:09:23.078 Firmware Slot 1 Read-Only: N/A 00:09:23.078 Firmware Activation Without Reset: N/A 00:09:23.078 Multiple Update Detection Support: N/A 00:09:23.078 Firmware Update Granularity: No Information Provided 00:09:23.078 Per-Namespace SMART Log: Yes 00:09:23.078 Asymmetric Namespace Access Log Page: Not Supported 00:09:23.078 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:23.078 Command Effects Log Page: Supported 00:09:23.078 Get Log Page Extended Data: Supported 00:09:23.078 Telemetry Log Pages: Not Supported 00:09:23.078 Persistent Event Log Pages: Not Supported 00:09:23.078 Supported Log Pages Log Page: May Support 00:09:23.078 Commands Supported & Effects Log Page: Not Supported 00:09:23.078 Feature Identifiers & Effects Log Page: May Support 00:09:23.078 NVMe-MI Commands & Effects Log Page: May Support 00:09:23.078 Data Area 4 for Telemetry Log: Not Supported 00:09:23.078 Error Log Page Entries Supported: 1 00:09:23.078 Keep Alive: Not Supported 00:09:23.078 00:09:23.078 NVM Command Set Attributes 00:09:23.078 ========================== 00:09:23.078 Submission Queue Entry Size 00:09:23.078 Max: 64 00:09:23.078 Min: 64 00:09:23.078 Completion Queue Entry Size 00:09:23.078 Max: 16 00:09:23.078 Min: 16 00:09:23.078 Number of Namespaces: 256 00:09:23.078 Compare Command: Supported 00:09:23.078 Write Uncorrectable Command: Not Supported 00:09:23.078 Dataset Management Command: Supported 00:09:23.078 Write Zeroes Command: Supported 00:09:23.078 Set Features Save Field: Supported 00:09:23.078 Reservations: Not Supported 00:09:23.078 Timestamp: Supported 00:09:23.078 Copy: Supported 00:09:23.078 Volatile Write Cache: Present 00:09:23.078 Atomic Write Unit (Normal): 1 00:09:23.078 Atomic Write Unit (PFail): 1 00:09:23.078 Atomic Compare & Write Unit: 1 00:09:23.078 Fused Compare & Write: Not Supported 00:09:23.078 Scatter-Gather List 00:09:23.078 SGL Command Set: Supported 00:09:23.078 SGL Keyed: Not Supported 00:09:23.078 SGL Bit Bucket Descriptor: Not Supported 00:09:23.078 SGL Metadata Pointer: Not Supported 00:09:23.078 Oversized SGL: Not Supported 00:09:23.078 SGL Metadata Address: Not Supported 00:09:23.078 SGL Offset: Not Supported 00:09:23.078 Transport SGL Data Block: Not Supported 00:09:23.078 Replay Protected Memory Block: Not Supported 00:09:23.078 00:09:23.078 Firmware Slot Information 00:09:23.079 ========================= 00:09:23.079 Active slot: 1 00:09:23.079 Slot 1 Firmware Revision: 1.0 00:09:23.079 00:09:23.079 00:09:23.079 Commands Supported and Effects 00:09:23.079 ============================== 00:09:23.079 Admin Commands 00:09:23.079 -------------- 00:09:23.079 Delete I/O Submission Queue (00h): Supported 00:09:23.079 Create I/O Submission Queue (01h): Supported 00:09:23.079
Get Log Page (02h): Supported 00:09:23.079 Delete I/O Completion Queue (04h): Supported 00:09:23.079 Create I/O Completion Queue (05h): Supported 00:09:23.079 Identify (06h): Supported 00:09:23.079 Abort (08h): Supported 00:09:23.079 Set Features (09h): Supported 00:09:23.079 Get Features (0Ah): Supported 00:09:23.079 Asynchronous Event Request (0Ch): Supported 00:09:23.079 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:23.079 Directive Send (19h): Supported 00:09:23.079 Directive Receive (1Ah): Supported 00:09:23.079 Virtualization Management (1Ch): Supported 00:09:23.079 Doorbell Buffer Config (7Ch): Supported 00:09:23.079 Format NVM (80h): Supported LBA-Change 00:09:23.079 I/O Commands 00:09:23.079 ------------ 00:09:23.079 Flush (00h): Supported LBA-Change 00:09:23.079 Write (01h): Supported LBA-Change 00:09:23.079 Read (02h): Supported 00:09:23.079 Compare (05h): Supported 00:09:23.079 Write Zeroes (08h): Supported LBA-Change 00:09:23.079 Dataset Management (09h): Supported LBA-Change 00:09:23.079 Unknown (0Ch): Supported 00:09:23.079 Unknown (12h): Supported 00:09:23.079 Copy (19h): Supported LBA-Change 00:09:23.079 Unknown (1Dh): Supported LBA-Change 00:09:23.079 00:09:23.079 Error Log 00:09:23.079 ========= 00:09:23.079 00:09:23.079 Arbitration 00:09:23.079 =========== 00:09:23.079 Arbitration Burst: no limit 00:09:23.079 00:09:23.079 Power Management 00:09:23.079 ================ 00:09:23.079 Number of Power States: 1 00:09:23.079 Current Power State: Power State #0 00:09:23.079 Power State #0: 00:09:23.079 Max Power: 25.00 W 00:09:23.079 Non-Operational State: Operational 00:09:23.079 Entry Latency: 16 microseconds 00:09:23.079 Exit Latency: 4 microseconds 00:09:23.079 Relative Read Throughput: 0 00:09:23.079 Relative Read Latency: 0 00:09:23.079 Relative Write Throughput: 0 00:09:23.079 Relative Write Latency: 0 00:09:23.079 Idle Power: Not Reported 00:09:23.079 Active Power: Not Reported 00:09:23.079 Non-Operational Permissive Mode: Not Supported 00:09:23.079 00:09:23.079 Health Information 00:09:23.079 ================== 00:09:23.079 Critical Warnings: 00:09:23.079 Available Spare Space: OK 00:09:23.079 Temperature: OK 00:09:23.079 Device Reliability: OK 00:09:23.079 Read Only: No 00:09:23.079 Volatile Memory Backup: OK 00:09:23.079 Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.079 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:23.079 Available Spare: 0% 00:09:23.079 Available Spare Threshold: 0% 00:09:23.079 Life Percentage Used: 0% 00:09:23.079 Data Units Read: 760 00:09:23.079 Data Units Written: 688 00:09:23.079 Host Read Commands: 36798 00:09:23.079 Host Write Commands: 36584 00:09:23.079 Controller Busy Time: 0 minutes 00:09:23.079 Power Cycles: 0 00:09:23.079 Power On Hours: 0 hours 00:09:23.079 Unsafe Shutdowns: 0 00:09:23.079 Unrecoverable Media Errors: 0 00:09:23.079 Lifetime Error Log Entries: 0 00:09:23.079 Warning Temperature Time: 0 minutes 00:09:23.079 Critical Temperature Time: 0 minutes 00:09:23.079 00:09:23.079 Number of Queues 00:09:23.079 ================ 00:09:23.079 Number of I/O Submission Queues: 64 00:09:23.079 Number of I/O Completion Queues: 64 00:09:23.079 00:09:23.079 ZNS Specific Controller Data 00:09:23.079 ============================ 00:09:23.079 Zone Append Size Limit: 0 00:09:23.079 00:09:23.079 00:09:23.079 Active Namespaces 00:09:23.079 ================= 00:09:23.079 Namespace ID:1 00:09:23.079 Error Recovery Timeout: Unlimited 00:09:23.079 Command Set Identifier: NVM (00h) 00:09:23.079 Deallocate: Supported 
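The temperature pairs reported in the health blocks above (323 Kelvin / 50 Celsius, 343 Kelvin / 70 Celsius) use a plain integer offset of 273 rather than 273.15; the arithmetic can be checked from a shell in one line, purely as an illustration:

  # Illustrative only: reproduce the Kelvin -> Celsius rounding used in these dumps.
  for k in 323 343; do echo "$k Kelvin = $((k - 273)) Celsius"; done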
00:09:23.079 Deallocated/Unwritten Error: Supported 00:09:23.079 Deallocated Read Value: All 0x00 00:09:23.079 Deallocate in Write Zeroes: Not Supported 00:09:23.079 Deallocated Guard Field: 0xFFFF 00:09:23.079 Flush: Supported 00:09:23.079 Reservation: Not Supported 00:09:23.079 Metadata Transferred as: Separate Metadata Buffer 00:09:23.079 Namespace Sharing Capabilities: Private 00:09:23.079 Size (in LBAs): 1548666 (5GiB) 00:09:23.079 Capacity (in LBAs): 1548666 (5GiB) 00:09:23.079 Utilization (in LBAs): 1548666 (5GiB) 00:09:23.079 Thin Provisioning: Not Supported 00:09:23.079 Per-NS Atomic Units: No 00:09:23.079 Maximum Single Source Range Length: 128 00:09:23.079 Maximum Copy Length: 128 00:09:23.079 Maximum Source Range Count: 128 00:09:23.079 NGUID/EUI64 Never Reused: No 00:09:23.079 Namespace Write Protected: No 00:09:23.079 Number of LBA Formats: 8 00:09:23.079 Current LBA Format: LBA Format #07 00:09:23.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:23.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:23.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.079 00:09:23.079 NVM Specific Namespace Data 00:09:23.079 =========================== 00:09:23.079 Logical Block Storage Tag Mask: 0 00:09:23.079 Protection Information Capabilities: 00:09:23.079 16b Guard Protection Information Storage Tag Support: No 00:09:23.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.079 Storage Tag Check Read Support: No 00:09:23.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.079 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:23.079 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:23.339 ===================================================== 00:09:23.339 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:23.339 ===================================================== 00:09:23.339 Controller Capabilities/Features 00:09:23.339 ================================ 00:09:23.339 Vendor ID: 1b36 00:09:23.339 Subsystem Vendor ID: 1af4 00:09:23.339 Serial Number: 12341 00:09:23.339 Model Number: QEMU NVMe Ctrl 00:09:23.339 Firmware Version: 8.0.0 00:09:23.339 Recommended Arb Burst: 6 00:09:23.339 IEEE OUI Identifier: 00 54 52 00:09:23.339 Multi-path I/O 00:09:23.339 May have multiple subsystem ports: No 00:09:23.339 May have multiple 
controllers: No 00:09:23.339 Associated with SR-IOV VF: No 00:09:23.339 Max Data Transfer Size: 524288 00:09:23.339 Max Number of Namespaces: 256 00:09:23.339 Max Number of I/O Queues: 64 00:09:23.339 NVMe Specification Version (VS): 1.4 00:09:23.339 NVMe Specification Version (Identify): 1.4 00:09:23.339 Maximum Queue Entries: 2048 00:09:23.339 Contiguous Queues Required: Yes 00:09:23.339 Arbitration Mechanisms Supported 00:09:23.339 Weighted Round Robin: Not Supported 00:09:23.339 Vendor Specific: Not Supported 00:09:23.339 Reset Timeout: 7500 ms 00:09:23.339 Doorbell Stride: 4 bytes 00:09:23.339 NVM Subsystem Reset: Not Supported 00:09:23.339 Command Sets Supported 00:09:23.339 NVM Command Set: Supported 00:09:23.339 Boot Partition: Not Supported 00:09:23.339 Memory Page Size Minimum: 4096 bytes 00:09:23.339 Memory Page Size Maximum: 65536 bytes 00:09:23.339 Persistent Memory Region: Not Supported 00:09:23.339 Optional Asynchronous Events Supported 00:09:23.339 Namespace Attribute Notices: Supported 00:09:23.339 Firmware Activation Notices: Not Supported 00:09:23.339 ANA Change Notices: Not Supported 00:09:23.339 PLE Aggregate Log Change Notices: Not Supported 00:09:23.339 LBA Status Info Alert Notices: Not Supported 00:09:23.339 EGE Aggregate Log Change Notices: Not Supported 00:09:23.339 Normal NVM Subsystem Shutdown event: Not Supported 00:09:23.339 Zone Descriptor Change Notices: Not Supported 00:09:23.339 Discovery Log Change Notices: Not Supported 00:09:23.339 Controller Attributes 00:09:23.339 128-bit Host Identifier: Not Supported 00:09:23.339 Non-Operational Permissive Mode: Not Supported 00:09:23.339 NVM Sets: Not Supported 00:09:23.339 Read Recovery Levels: Not Supported 00:09:23.339 Endurance Groups: Not Supported 00:09:23.339 Predictable Latency Mode: Not Supported 00:09:23.339 Traffic Based Keep Alive: Not Supported 00:09:23.339 Namespace Granularity: Not Supported 00:09:23.339 SQ Associations: Not Supported 00:09:23.339 UUID List: Not Supported 00:09:23.339 Multi-Domain Subsystem: Not Supported 00:09:23.339 Fixed Capacity Management: Not Supported 00:09:23.339 Variable Capacity Management: Not Supported 00:09:23.339 Delete Endurance Group: Not Supported 00:09:23.339 Delete NVM Set: Not Supported 00:09:23.339 Extended LBA Formats Supported: Supported 00:09:23.339 Flexible Data Placement Supported: Not Supported 00:09:23.339 00:09:23.339 Controller Memory Buffer Support 00:09:23.339 ================================ 00:09:23.339 Supported: No 00:09:23.339 00:09:23.339 Persistent Memory Region Support 00:09:23.339 ================================ 00:09:23.339 Supported: No 00:09:23.339 00:09:23.339 Admin Command Set Attributes 00:09:23.339 ============================ 00:09:23.339 Security Send/Receive: Not Supported 00:09:23.339 Format NVM: Supported 00:09:23.339 Firmware Activate/Download: Not Supported 00:09:23.339 Namespace Management: Supported 00:09:23.339 Device Self-Test: Not Supported 00:09:23.339 Directives: Supported 00:09:23.339 NVMe-MI: Not Supported 00:09:23.339 Virtualization Management: Not Supported 00:09:23.339 Doorbell Buffer Config: Supported 00:09:23.339 Get LBA Status Capability: Not Supported 00:09:23.339 Command & Feature Lockdown Capability: Not Supported 00:09:23.339 Abort Command Limit: 4 00:09:23.339 Async Event Request Limit: 4 00:09:23.339 Number of Firmware Slots: N/A 00:09:23.339 Firmware Slot 1 Read-Only: N/A 00:09:23.339 Firmware Activation Without Reset: N/A 00:09:23.339 Multiple Update Detection Support: N/A 00:09:23.339 Firmware Update
Granularity: No Information Provided 00:09:23.339 Per-Namespace SMART Log: Yes 00:09:23.339 Asymmetric Namespace Access Log Page: Not Supported 00:09:23.339 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:23.339 Command Effects Log Page: Supported 00:09:23.339 Get Log Page Extended Data: Supported 00:09:23.339 Telemetry Log Pages: Not Supported 00:09:23.339 Persistent Event Log Pages: Not Supported 00:09:23.339 Supported Log Pages Log Page: May Support 00:09:23.339 Commands Supported & Effects Log Page: Not Supported 00:09:23.339 Feature Identifiers & Effects Log Page: May Support 00:09:23.339 NVMe-MI Commands & Effects Log Page: May Support 00:09:23.339 Data Area 4 for Telemetry Log: Not Supported 00:09:23.339 Error Log Page Entries Supported: 1 00:09:23.339 Keep Alive: Not Supported 00:09:23.339 00:09:23.339 NVM Command Set Attributes 00:09:23.339 ========================== 00:09:23.339 Submission Queue Entry Size 00:09:23.339 Max: 64 00:09:23.339 Min: 64 00:09:23.339 Completion Queue Entry Size 00:09:23.339 Max: 16 00:09:23.339 Min: 16 00:09:23.339 Number of Namespaces: 256 00:09:23.339 Compare Command: Supported 00:09:23.339 Write Uncorrectable Command: Not Supported 00:09:23.339 Dataset Management Command: Supported 00:09:23.339 Write Zeroes Command: Supported 00:09:23.339 Set Features Save Field: Supported 00:09:23.339 Reservations: Not Supported 00:09:23.339 Timestamp: Supported 00:09:23.339 Copy: Supported 00:09:23.339 Volatile Write Cache: Present 00:09:23.339 Atomic Write Unit (Normal): 1 00:09:23.339 Atomic Write Unit (PFail): 1 00:09:23.339 Atomic Compare & Write Unit: 1 00:09:23.339 Fused Compare & Write: Not Supported 00:09:23.339 Scatter-Gather List 00:09:23.339 SGL Command Set: Supported 00:09:23.339 SGL Keyed: Not Supported 00:09:23.339 SGL Bit Bucket Descriptor: Not Supported 00:09:23.339 SGL Metadata Pointer: Not Supported 00:09:23.339 Oversized SGL: Not Supported 00:09:23.339 SGL Metadata Address: Not Supported 00:09:23.339 SGL Offset: Not Supported 00:09:23.339 Transport SGL Data Block: Not Supported 00:09:23.339 Replay Protected Memory Block: Not Supported 00:09:23.339 00:09:23.339 Firmware Slot Information 00:09:23.339 ========================= 00:09:23.339 Active slot: 1 00:09:23.339 Slot 1 Firmware Revision: 1.0 00:09:23.339 00:09:23.339 00:09:23.339 Commands Supported and Effects 00:09:23.339 ============================== 00:09:23.339 Admin Commands 00:09:23.339 -------------- 00:09:23.339 Delete I/O Submission Queue (00h): Supported 00:09:23.339 Create I/O Submission Queue (01h): Supported 00:09:23.339 Get Log Page (02h): Supported 00:09:23.339 Delete I/O Completion Queue (04h): Supported 00:09:23.339 Create I/O Completion Queue (05h): Supported 00:09:23.340 Identify (06h): Supported 00:09:23.340 Abort (08h): Supported 00:09:23.340 Set Features (09h): Supported 00:09:23.340 Get Features (0Ah): Supported 00:09:23.340 Asynchronous Event Request (0Ch): Supported 00:09:23.340 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:23.340 Directive Send (19h): Supported 00:09:23.340 Directive Receive (1Ah): Supported 00:09:23.340 Virtualization Management (1Ch): Supported 00:09:23.340 Doorbell Buffer Config (7Ch): Supported 00:09:23.340 Format NVM (80h): Supported LBA-Change 00:09:23.340 I/O Commands 00:09:23.340 ------------ 00:09:23.340 Flush (00h): Supported LBA-Change 00:09:23.340 Write (01h): Supported LBA-Change 00:09:23.340 Read (02h): Supported 00:09:23.340 Compare (05h): Supported 00:09:23.340 Write Zeroes (08h): Supported LBA-Change 00:09:23.340
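Because every controller dump repeats the same section headers, a saved copy of this output can be sliced between two headers to compare one section across devices; a sketch, assuming the dump was captured to a file named identify-12341.log (an assumed filename, not produced by this job):

  # Print only the 'NVM Command Set Attributes' section from a captured dump.
  awk '/^NVM Command Set Attributes/{f=1} /^Firmware Slot Information/{f=0} f' identify-12341.log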
Dataset Management (09h): Supported LBA-Change 00:09:23.340 Unknown (0Ch): Supported 00:09:23.340 Unknown (12h): Supported 00:09:23.340 Copy (19h): Supported LBA-Change 00:09:23.340 Unknown (1Dh): Supported LBA-Change 00:09:23.340 00:09:23.340 Error Log 00:09:23.340 ========= 00:09:23.340 00:09:23.340 Arbitration 00:09:23.340 =========== 00:09:23.340 Arbitration Burst: no limit 00:09:23.340 00:09:23.340 Power Management 00:09:23.340 ================ 00:09:23.340 Number of Power States: 1 00:09:23.340 Current Power State: Power State #0 00:09:23.340 Power State #0: 00:09:23.340 Max Power: 25.00 W 00:09:23.340 Non-Operational State: Operational 00:09:23.340 Entry Latency: 16 microseconds 00:09:23.340 Exit Latency: 4 microseconds 00:09:23.340 Relative Read Throughput: 0 00:09:23.340 Relative Read Latency: 0 00:09:23.340 Relative Write Throughput: 0 00:09:23.340 Relative Write Latency: 0 00:09:23.340 Idle Power: Not Reported 00:09:23.340 Active Power: Not Reported 00:09:23.340 Non-Operational Permissive Mode: Not Supported 00:09:23.340 00:09:23.340 Health Information 00:09:23.340 ================== 00:09:23.340 Critical Warnings: 00:09:23.340 Available Spare Space: OK 00:09:23.340 Temperature: OK 00:09:23.340 Device Reliability: OK 00:09:23.340 Read Only: No 00:09:23.340 Volatile Memory Backup: OK 00:09:23.340 Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.340 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:23.340 Available Spare: 0% 00:09:23.340 Available Spare Threshold: 0% 00:09:23.340 Life Percentage Used: 0% 00:09:23.340 Data Units Read: 1154 00:09:23.340 Data Units Written: 1021 00:09:23.340 Host Read Commands: 54443 00:09:23.340 Host Write Commands: 53228 00:09:23.340 Controller Busy Time: 0 minutes 00:09:23.340 Power Cycles: 0 00:09:23.340 Power On Hours: 0 hours 00:09:23.340 Unsafe Shutdowns: 0 00:09:23.340 Unrecoverable Media Errors: 0 00:09:23.340 Lifetime Error Log Entries: 0 00:09:23.340 Warning Temperature Time: 0 minutes 00:09:23.340 Critical Temperature Time: 0 minutes 00:09:23.340 00:09:23.340 Number of Queues 00:09:23.340 ================ 00:09:23.340 Number of I/O Submission Queues: 64 00:09:23.340 Number of I/O Completion Queues: 64 00:09:23.340 00:09:23.340 ZNS Specific Controller Data 00:09:23.340 ============================ 00:09:23.340 Zone Append Size Limit: 0 00:09:23.340 00:09:23.340 00:09:23.340 Active Namespaces 00:09:23.340 ================= 00:09:23.340 Namespace ID:1 00:09:23.340 Error Recovery Timeout: Unlimited 00:09:23.340 Command Set Identifier: NVM (00h) 00:09:23.340 Deallocate: Supported 00:09:23.340 Deallocated/Unwritten Error: Supported 00:09:23.340 Deallocated Read Value: All 0x00 00:09:23.340 Deallocate in Write Zeroes: Not Supported 00:09:23.340 Deallocated Guard Field: 0xFFFF 00:09:23.340 Flush: Supported 00:09:23.340 Reservation: Not Supported 00:09:23.340 Namespace Sharing Capabilities: Private 00:09:23.340 Size (in LBAs): 1310720 (5GiB) 00:09:23.340 Capacity (in LBAs): 1310720 (5GiB) 00:09:23.340 Utilization (in LBAs): 1310720 (5GiB) 00:09:23.340 Thin Provisioning: Not Supported 00:09:23.340 Per-NS Atomic Units: No 00:09:23.340 Maximum Single Source Range Length: 128 00:09:23.340 Maximum Copy Length: 128 00:09:23.340 Maximum Source Range Count: 128 00:09:23.340 NGUID/EUI64 Never Reused: No 00:09:23.340 Namespace Write Protected: No 00:09:23.340 Number of LBA Formats: 8 00:09:23.340 Current LBA Format: LBA Format #04 00:09:23.340 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.340 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:09:23.340 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.340 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:23.340 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.340 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.340 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.340 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.340 00:09:23.340 NVM Specific Namespace Data 00:09:23.340 =========================== 00:09:23.340 Logical Block Storage Tag Mask: 0 00:09:23.340 Protection Information Capabilities: 00:09:23.340 16b Guard Protection Information Storage Tag Support: No 00:09:23.340 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.340 Storage Tag Check Read Support: No 00:09:23.340 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.340 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:23.340 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:23.600 ===================================================== 00:09:23.600 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:23.600 ===================================================== 00:09:23.600 Controller Capabilities/Features 00:09:23.600 ================================ 00:09:23.600 Vendor ID: 1b36 00:09:23.600 Subsystem Vendor ID: 1af4 00:09:23.600 Serial Number: 12342 00:09:23.600 Model Number: QEMU NVMe Ctrl 00:09:23.600 Firmware Version: 8.0.0 00:09:23.600 Recommended Arb Burst: 6 00:09:23.600 IEEE OUI Identifier: 00 54 52 00:09:23.600 Multi-path I/O 00:09:23.600 May have multiple subsystem ports: No 00:09:23.600 May have multiple controllers: No 00:09:23.600 Associated with SR-IOV VF: No 00:09:23.600 Max Data Transfer Size: 524288 00:09:23.600 Max Number of Namespaces: 256 00:09:23.600 Max Number of I/O Queues: 64 00:09:23.600 NVMe Specification Version (VS): 1.4 00:09:23.600 NVMe Specification Version (Identify): 1.4 00:09:23.600 Maximum Queue Entries: 2048 00:09:23.600 Contiguous Queues Required: Yes 00:09:23.600 Arbitration Mechanisms Supported 00:09:23.600 Weighted Round Robin: Not Supported 00:09:23.600 Vendor Specific: Not Supported 00:09:23.600 Reset Timeout: 7500 ms 00:09:23.600 Doorbell Stride: 4 bytes 00:09:23.600 NVM Subsystem Reset: Not Supported 00:09:23.600 Command Sets Supported 00:09:23.600 NVM Command Set: Supported 00:09:23.600 Boot Partition: Not Supported 00:09:23.600 Memory Page Size Minimum: 4096 bytes 00:09:23.600 Memory Page Size Maximum: 65536 bytes 00:09:23.600 Persistent Memory Region: Not Supported 00:09:23.600 Optional Asynchronous Events Supported 00:09:23.600 Namespace Attribute Notices: Supported 00:09:23.600 
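The nvme.sh trace lines repeated above show the test iterating "${bdfs[@]}" and running one spdk_nvme_identify per bound device; a minimal reconstruction of that loop follows (the array contents are inferred from the four addresses probed in this log, not copied from the script):

  # Assumed shape of the loop seen in the nvme.sh trace.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done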
Firmware Activation Notices: Not Supported 00:09:23.600 ANA Change Notices: Not Supported 00:09:23.600 PLE Aggregate Log Change Notices: Not Supported 00:09:23.600 LBA Status Info Alert Notices: Not Supported 00:09:23.600 EGE Aggregate Log Change Notices: Not Supported 00:09:23.600 Normal NVM Subsystem Shutdown event: Not Supported 00:09:23.600 Zone Descriptor Change Notices: Not Supported 00:09:23.600 Discovery Log Change Notices: Not Supported 00:09:23.600 Controller Attributes 00:09:23.600 128-bit Host Identifier: Not Supported 00:09:23.600 Non-Operational Permissive Mode: Not Supported 00:09:23.600 NVM Sets: Not Supported 00:09:23.600 Read Recovery Levels: Not Supported 00:09:23.600 Endurance Groups: Not Supported 00:09:23.600 Predictable Latency Mode: Not Supported 00:09:23.600 Traffic Based Keep Alive: Not Supported 00:09:23.600 Namespace Granularity: Not Supported 00:09:23.600 SQ Associations: Not Supported 00:09:23.600 UUID List: Not Supported 00:09:23.600 Multi-Domain Subsystem: Not Supported 00:09:23.600 Fixed Capacity Management: Not Supported 00:09:23.600 Variable Capacity Management: Not Supported 00:09:23.600 Delete Endurance Group: Not Supported 00:09:23.600 Delete NVM Set: Not Supported 00:09:23.600 Extended LBA Formats Supported: Supported 00:09:23.600 Flexible Data Placement Supported: Not Supported 00:09:23.600 00:09:23.600 Controller Memory Buffer Support 00:09:23.600 ================================ 00:09:23.600 Supported: No 00:09:23.600 00:09:23.600 Persistent Memory Region Support 00:09:23.600 ================================ 00:09:23.600 Supported: No 00:09:23.600 00:09:23.600 Admin Command Set Attributes 00:09:23.600 ============================ 00:09:23.600 Security Send/Receive: Not Supported 00:09:23.600 Format NVM: Supported 00:09:23.600 Firmware Activate/Download: Not Supported 00:09:23.600 Namespace Management: Supported 00:09:23.600 Device Self-Test: Not Supported 00:09:23.600 Directives: Supported 00:09:23.600 NVMe-MI: Not Supported 00:09:23.600 Virtualization Management: Not Supported 00:09:23.600 Doorbell Buffer Config: Supported 00:09:23.600 Get LBA Status Capability: Not Supported 00:09:23.600 Command & Feature Lockdown Capability: Not Supported 00:09:23.600 Abort Command Limit: 4 00:09:23.600 Async Event Request Limit: 4 00:09:23.600 Number of Firmware Slots: N/A 00:09:23.600 Firmware Slot 1 Read-Only: N/A 00:09:23.600 Firmware Activation Without Reset: N/A 00:09:23.600 Multiple Update Detection Support: N/A 00:09:23.600 Firmware Update Granularity: No Information Provided 00:09:23.600 Per-Namespace SMART Log: Yes 00:09:23.600 Asymmetric Namespace Access Log Page: Not Supported 00:09:23.600 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:23.600 Command Effects Log Page: Supported 00:09:23.600 Get Log Page Extended Data: Supported 00:09:23.600 Telemetry Log Pages: Not Supported 00:09:23.600 Persistent Event Log Pages: Not Supported 00:09:23.600 Supported Log Pages Log Page: May Support 00:09:23.600 Commands Supported & Effects Log Page: Not Supported 00:09:23.600 Feature Identifiers & Effects Log Page: May Support 00:09:23.600 NVMe-MI Commands & Effects Log Page: May Support 00:09:23.600 Data Area 4 for Telemetry Log: Not Supported 00:09:23.601 Error Log Page Entries Supported: 1 00:09:23.601 Keep Alive: Not Supported 00:09:23.601 00:09:23.601 NVM Command Set Attributes 00:09:23.601 ========================== 00:09:23.601 Submission Queue Entry Size 00:09:23.601 Max: 64 00:09:23.601 Min: 64 00:09:23.601 Completion Queue Entry Size 00:09:23.601 Max: 16
00:09:23.601 Min: 16 00:09:23.601 Number of Namespaces: 256 00:09:23.601 Compare Command: Supported 00:09:23.601 Write Uncorrectable Command: Not Supported 00:09:23.601 Dataset Management Command: Supported 00:09:23.601 Write Zeroes Command: Supported 00:09:23.601 Set Features Save Field: Supported 00:09:23.601 Reservations: Not Supported 00:09:23.601 Timestamp: Supported 00:09:23.601 Copy: Supported 00:09:23.601 Volatile Write Cache: Present 00:09:23.601 Atomic Write Unit (Normal): 1 00:09:23.601 Atomic Write Unit (PFail): 1 00:09:23.601 Atomic Compare & Write Unit: 1 00:09:23.601 Fused Compare & Write: Not Supported 00:09:23.601 Scatter-Gather List 00:09:23.601 SGL Command Set: Supported 00:09:23.601 SGL Keyed: Not Supported 00:09:23.601 SGL Bit Bucket Descriptor: Not Supported 00:09:23.601 SGL Metadata Pointer: Not Supported 00:09:23.601 Oversized SGL: Not Supported 00:09:23.601 SGL Metadata Address: Not Supported 00:09:23.601 SGL Offset: Not Supported 00:09:23.601 Transport SGL Data Block: Not Supported 00:09:23.601 Replay Protected Memory Block: Not Supported 00:09:23.601 00:09:23.601 Firmware Slot Information 00:09:23.601 ========================= 00:09:23.601 Active slot: 1 00:09:23.601 Slot 1 Firmware Revision: 1.0 00:09:23.601 00:09:23.601 00:09:23.601 Commands Supported and Effects 00:09:23.601 ============================== 00:09:23.601 Admin Commands 00:09:23.601 -------------- 00:09:23.601 Delete I/O Submission Queue (00h): Supported 00:09:23.601 Create I/O Submission Queue (01h): Supported 00:09:23.601 Get Log Page (02h): Supported 00:09:23.601 Delete I/O Completion Queue (04h): Supported 00:09:23.601 Create I/O Completion Queue (05h): Supported 00:09:23.601 Identify (06h): Supported 00:09:23.601 Abort (08h): Supported 00:09:23.601 Set Features (09h): Supported 00:09:23.601 Get Features (0Ah): Supported 00:09:23.601 Asynchronous Event Request (0Ch): Supported 00:09:23.601 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:23.601 Directive Send (19h): Supported 00:09:23.601 Directive Receive (1Ah): Supported 00:09:23.601 Virtualization Management (1Ch): Supported 00:09:23.601 Doorbell Buffer Config (7Ch): Supported 00:09:23.601 Format NVM (80h): Supported LBA-Change 00:09:23.601 I/O Commands 00:09:23.601 ------------ 00:09:23.601 Flush (00h): Supported LBA-Change 00:09:23.601 Write (01h): Supported LBA-Change 00:09:23.601 Read (02h): Supported 00:09:23.601 Compare (05h): Supported 00:09:23.601 Write Zeroes (08h): Supported LBA-Change 00:09:23.601 Dataset Management (09h): Supported LBA-Change 00:09:23.601 Unknown (0Ch): Supported 00:09:23.601 Unknown (12h): Supported 00:09:23.601 Copy (19h): Supported LBA-Change 00:09:23.601 Unknown (1Dh): Supported LBA-Change 00:09:23.601 00:09:23.601 Error Log 00:09:23.601 ========= 00:09:23.601 00:09:23.601 Arbitration 00:09:23.601 =========== 00:09:23.601 Arbitration Burst: no limit 00:09:23.601 00:09:23.601 Power Management 00:09:23.601 ================ 00:09:23.601 Number of Power States: 1 00:09:23.601 Current Power State: Power State #0 00:09:23.601 Power State #0: 00:09:23.601 Max Power: 25.00 W 00:09:23.601 Non-Operational State: Operational 00:09:23.601 Entry Latency: 16 microseconds 00:09:23.601 Exit Latency: 4 microseconds 00:09:23.601 Relative Read Throughput: 0 00:09:23.601 Relative Read Latency: 0 00:09:23.601 Relative Write Throughput: 0 00:09:23.601 Relative Write Latency: 0 00:09:23.601 Idle Power: Not Reported 00:09:23.601 Active Power: Not Reported 00:09:23.601 Non-Operational Permissive Mode: Not Supported 
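All four QEMU controllers advertise the same single power state (25.00 W, 16 microsecond entry latency, 4 microsecond exit latency), so a quick way to confirm two dumps agree is to diff just that section; a sketch, assuming per-controller capture files named identify-12340.log and identify-12342.log (hypothetical names, not created by this job):

  # Hypothetical consistency check between two captured identify dumps.
  pm() { awk '/^Power Management/{f=1} /^Health Information/{f=0} f' "$1"; }
  diff <(pm identify-12340.log) <(pm identify-12342.log) && echo 'power tables match'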
00:09:23.601 00:09:23.601 Health Information 00:09:23.601 ================== 00:09:23.601 Critical Warnings: 00:09:23.601 Available Spare Space: OK 00:09:23.601 Temperature: OK 00:09:23.601 Device Reliability: OK 00:09:23.601 Read Only: No 00:09:23.601 Volatile Memory Backup: OK 00:09:23.601 Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.601 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:23.601 Available Spare: 0% 00:09:23.601 Available Spare Threshold: 0% 00:09:23.601 Life Percentage Used: 0% 00:09:23.601 Data Units Read: 2405 00:09:23.601 Data Units Written: 2192 00:09:23.601 Host Read Commands: 112386 00:09:23.601 Host Write Commands: 110656 00:09:23.601 Controller Busy Time: 0 minutes 00:09:23.601 Power Cycles: 0 00:09:23.601 Power On Hours: 0 hours 00:09:23.601 Unsafe Shutdowns: 0 00:09:23.601 Unrecoverable Media Errors: 0 00:09:23.601 Lifetime Error Log Entries: 0 00:09:23.601 Warning Temperature Time: 0 minutes 00:09:23.601 Critical Temperature Time: 0 minutes 00:09:23.601 00:09:23.601 Number of Queues 00:09:23.601 ================ 00:09:23.601 Number of I/O Submission Queues: 64 00:09:23.601 Number of I/O Completion Queues: 64 00:09:23.601 00:09:23.601 ZNS Specific Controller Data 00:09:23.601 ============================ 00:09:23.601 Zone Append Size Limit: 0 00:09:23.601 00:09:23.601 00:09:23.601 Active Namespaces 00:09:23.601 ================= 00:09:23.601 Namespace ID:1 00:09:23.601 Error Recovery Timeout: Unlimited 00:09:23.601 Command Set Identifier: NVM (00h) 00:09:23.601 Deallocate: Supported 00:09:23.601 Deallocated/Unwritten Error: Supported 00:09:23.601 Deallocated Read Value: All 0x00 00:09:23.601 Deallocate in Write Zeroes: Not Supported 00:09:23.601 Deallocated Guard Field: 0xFFFF 00:09:23.601 Flush: Supported 00:09:23.601 Reservation: Not Supported 00:09:23.601 Namespace Sharing Capabilities: Private 00:09:23.601 Size (in LBAs): 1048576 (4GiB) 00:09:23.601 Capacity (in LBAs): 1048576 (4GiB) 00:09:23.601 Utilization (in LBAs): 1048576 (4GiB) 00:09:23.601 Thin Provisioning: Not Supported 00:09:23.601 Per-NS Atomic Units: No 00:09:23.601 Maximum Single Source Range Length: 128 00:09:23.601 Maximum Copy Length: 128 00:09:23.601 Maximum Source Range Count: 128 00:09:23.601 NGUID/EUI64 Never Reused: No 00:09:23.601 Namespace Write Protected: No 00:09:23.601 Number of LBA Formats: 8 00:09:23.601 Current LBA Format: LBA Format #04 00:09:23.601 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.601 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:23.601 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.601 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:23.601 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.601 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.601 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.601 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.601 00:09:23.601 NVM Specific Namespace Data 00:09:23.601 =========================== 00:09:23.601 Logical Block Storage Tag Mask: 0 00:09:23.601 Protection Information Capabilities: 00:09:23.601 16b Guard Protection Information Storage Tag Support: No 00:09:23.601 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.601 Storage Tag Check Read Support: No 00:09:23.601 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.601 Namespace ID:2 00:09:23.601 Error Recovery Timeout: Unlimited 00:09:23.601 Command Set Identifier: NVM (00h) 00:09:23.601 Deallocate: Supported 00:09:23.601 Deallocated/Unwritten Error: Supported 00:09:23.601 Deallocated Read Value: All 0x00 00:09:23.601 Deallocate in Write Zeroes: Not Supported 00:09:23.601 Deallocated Guard Field: 0xFFFF 00:09:23.601 Flush: Supported 00:09:23.601 Reservation: Not Supported 00:09:23.601 Namespace Sharing Capabilities: Private 00:09:23.601 Size (in LBAs): 1048576 (4GiB) 00:09:23.601 Capacity (in LBAs): 1048576 (4GiB) 00:09:23.601 Utilization (in LBAs): 1048576 (4GiB) 00:09:23.601 Thin Provisioning: Not Supported 00:09:23.601 Per-NS Atomic Units: No 00:09:23.601 Maximum Single Source Range Length: 128 00:09:23.601 Maximum Copy Length: 128 00:09:23.601 Maximum Source Range Count: 128 00:09:23.601 NGUID/EUI64 Never Reused: No 00:09:23.601 Namespace Write Protected: No 00:09:23.601 Number of LBA Formats: 8 00:09:23.601 Current LBA Format: LBA Format #04 00:09:23.601 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.601 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:23.601 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.601 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:23.601 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.601 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.601 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.601 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.602 00:09:23.602 NVM Specific Namespace Data 00:09:23.602 =========================== 00:09:23.602 Logical Block Storage Tag Mask: 0 00:09:23.602 Protection Information Capabilities: 00:09:23.602 16b Guard Protection Information Storage Tag Support: No 00:09:23.602 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.602 Storage Tag Check Read Support: No 00:09:23.602 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Namespace ID:3 00:09:23.602 Error Recovery Timeout: Unlimited 00:09:23.602 Command Set Identifier: NVM (00h) 00:09:23.602 Deallocate: Supported 00:09:23.602 Deallocated/Unwritten Error: Supported 00:09:23.602 Deallocated Read 
Value: All 0x00 00:09:23.602 Deallocate in Write Zeroes: Not Supported 00:09:23.602 Deallocated Guard Field: 0xFFFF 00:09:23.602 Flush: Supported 00:09:23.602 Reservation: Not Supported 00:09:23.602 Namespace Sharing Capabilities: Private 00:09:23.602 Size (in LBAs): 1048576 (4GiB) 00:09:23.602 Capacity (in LBAs): 1048576 (4GiB) 00:09:23.602 Utilization (in LBAs): 1048576 (4GiB) 00:09:23.602 Thin Provisioning: Not Supported 00:09:23.602 Per-NS Atomic Units: No 00:09:23.602 Maximum Single Source Range Length: 128 00:09:23.602 Maximum Copy Length: 128 00:09:23.602 Maximum Source Range Count: 128 00:09:23.602 NGUID/EUI64 Never Reused: No 00:09:23.602 Namespace Write Protected: No 00:09:23.602 Number of LBA Formats: 8 00:09:23.602 Current LBA Format: LBA Format #04 00:09:23.602 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.602 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:23.602 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.602 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:23.602 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.602 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.602 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.602 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.602 00:09:23.602 NVM Specific Namespace Data 00:09:23.602 =========================== 00:09:23.602 Logical Block Storage Tag Mask: 0 00:09:23.602 Protection Information Capabilities: 00:09:23.602 16b Guard Protection Information Storage Tag Support: No 00:09:23.602 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.602 Storage Tag Check Read Support: No 00:09:23.602 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.602 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:23.602 04:33:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:23.863 ===================================================== 00:09:23.863 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.863 ===================================================== 00:09:23.863 Controller Capabilities/Features 00:09:23.863 ================================ 00:09:23.863 Vendor ID: 1b36 00:09:23.863 Subsystem Vendor ID: 1af4 00:09:23.863 Serial Number: 12343 00:09:23.863 Model Number: QEMU NVMe Ctrl 00:09:23.863 Firmware Version: 8.0.0 00:09:23.863 Recommended Arb Burst: 6 00:09:23.863 IEEE OUI Identifier: 00 54 52 00:09:23.863 Multi-path I/O 00:09:23.863 May have multiple subsystem ports: No 00:09:23.863 May have multiple controllers: Yes 00:09:23.863 Associated with SR-IOV VF: No 00:09:23.863 Max Data Transfer Size: 524288 00:09:23.863 Max Number of Namespaces: 
256 00:09:23.863 Max Number of I/O Queues: 64 00:09:23.863 NVMe Specification Version (VS): 1.4 00:09:23.863 NVMe Specification Version (Identify): 1.4 00:09:23.863 Maximum Queue Entries: 2048 00:09:23.863 Contiguous Queues Required: Yes 00:09:23.863 Arbitration Mechanisms Supported 00:09:23.863 Weighted Round Robin: Not Supported 00:09:23.863 Vendor Specific: Not Supported 00:09:23.863 Reset Timeout: 7500 ms 00:09:23.863 Doorbell Stride: 4 bytes 00:09:23.863 NVM Subsystem Reset: Not Supported 00:09:23.863 Command Sets Supported 00:09:23.863 NVM Command Set: Supported 00:09:23.863 Boot Partition: Not Supported 00:09:23.863 Memory Page Size Minimum: 4096 bytes 00:09:23.863 Memory Page Size Maximum: 65536 bytes 00:09:23.863 Persistent Memory Region: Not Supported 00:09:23.863 Optional Asynchronous Events Supported 00:09:23.863 Namespace Attribute Notices: Supported 00:09:23.863 Firmware Activation Notices: Not Supported 00:09:23.863 ANA Change Notices: Not Supported 00:09:23.863 PLE Aggregate Log Change Notices: Not Supported 00:09:23.863 LBA Status Info Alert Notices: Not Supported 00:09:23.863 EGE Aggregate Log Change Notices: Not Supported 00:09:23.863 Normal NVM Subsystem Shutdown event: Not Supported 00:09:23.863 Zone Descriptor Change Notices: Not Supported 00:09:23.863 Discovery Log Change Notices: Not Supported 00:09:23.863 Controller Attributes 00:09:23.863 128-bit Host Identifier: Not Supported 00:09:23.863 Non-Operational Permissive Mode: Not Supported 00:09:23.863 NVM Sets: Not Supported 00:09:23.863 Read Recovery Levels: Not Supported 00:09:23.863 Endurance Groups: Supported 00:09:23.863 Predictable Latency Mode: Not Supported 00:09:23.863 Traffic Based Keep Alive: Not Supported 00:09:23.863 Namespace Granularity: Not Supported 00:09:23.863 SQ Associations: Not Supported 00:09:23.863 UUID List: Not Supported 00:09:23.863 Multi-Domain Subsystem: Not Supported 00:09:23.863 Fixed Capacity Management: Not Supported 00:09:23.863 Variable Capacity Management: Not Supported 00:09:23.863 Delete Endurance Group: Not Supported 00:09:23.863 Delete NVM Set: Not Supported 00:09:23.863 Extended LBA Formats Supported: Supported 00:09:23.863 Flexible Data Placement Supported: Supported 00:09:23.863 00:09:23.863 Controller Memory Buffer Support 00:09:23.863 ================================ 00:09:23.863 Supported: No 00:09:23.863 00:09:23.863 Persistent Memory Region Support 00:09:23.863 ================================ 00:09:23.863 Supported: No 00:09:23.863 00:09:23.863 Admin Command Set Attributes 00:09:23.863 ============================ 00:09:23.863 Security Send/Receive: Not Supported 00:09:23.863 Format NVM: Supported 00:09:23.863 Firmware Activate/Download: Not Supported 00:09:23.863 Namespace Management: Supported 00:09:23.863 Device Self-Test: Not Supported 00:09:23.863 Directives: Supported 00:09:23.863 NVMe-MI: Not Supported 00:09:23.863 Virtualization Management: Not Supported 00:09:23.863 Doorbell Buffer Config: Supported 00:09:23.863 Get LBA Status Capability: Not Supported 00:09:23.863 Command & Feature Lockdown Capability: Not Supported 00:09:23.863 Abort Command Limit: 4 00:09:23.863 Async Event Request Limit: 4 00:09:23.863 Number of Firmware Slots: N/A 00:09:23.863 Firmware Slot 1 Read-Only: N/A 00:09:23.863 Firmware Activation Without Reset: N/A 00:09:23.863 Multiple Update Detection Support: N/A 00:09:23.863 Firmware Update Granularity: No Information Provided 00:09:23.863 Per-Namespace SMART Log: Yes 00:09:23.863 Asymmetric Namespace Access Log Page: Not Supported 
00:09:23.863 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:23.863 Command Effects Log Page: Supported 00:09:23.863 Get Log Page Extended Data: Supported 00:09:23.863 Telemetry Log Pages: Not Supported 00:09:23.863 Persistent Event Log Pages: Not Supported 00:09:23.863 Supported Log Pages Log Page: May Support 00:09:23.863 Commands Supported & Effects Log Page: Not Supported 00:09:23.863 Feature Identifiers & Effects Log Page: May Support 00:09:23.863 NVMe-MI Commands & Effects Log Page: May Support 00:09:23.863 Data Area 4 for Telemetry Log: Not Supported 00:09:23.863 Error Log Page Entries Supported: 1 00:09:23.863 Keep Alive: Not Supported 00:09:23.863 00:09:23.863 NVM Command Set Attributes 00:09:23.863 ========================== 00:09:23.863 Submission Queue Entry Size 00:09:23.863 Max: 64 00:09:23.863 Min: 64 00:09:23.863 Completion Queue Entry Size 00:09:23.863 Max: 16 00:09:23.863 Min: 16 00:09:23.863 Number of Namespaces: 256 00:09:23.863 Compare Command: Supported 00:09:23.863 Write Uncorrectable Command: Not Supported 00:09:23.863 Dataset Management Command: Supported 00:09:23.863 Write Zeroes Command: Supported 00:09:23.863 Set Features Save Field: Supported 00:09:23.863 Reservations: Not Supported 00:09:23.863 Timestamp: Supported 00:09:23.863 Copy: Supported 00:09:23.863 Volatile Write Cache: Present 00:09:23.863 Atomic Write Unit (Normal): 1 00:09:23.863 Atomic Write Unit (PFail): 1 00:09:23.863 Atomic Compare & Write Unit: 1 00:09:23.863 Fused Compare & Write: Not Supported 00:09:23.863 Scatter-Gather List 00:09:23.863 SGL Command Set: Supported 00:09:23.863 SGL Keyed: Not Supported 00:09:23.863 SGL Bit Bucket Descriptor: Not Supported 00:09:23.863 SGL Metadata Pointer: Not Supported 00:09:23.863 Oversized SGL: Not Supported 00:09:23.863 SGL Metadata Address: Not Supported 00:09:23.863 SGL Offset: Not Supported 00:09:23.863 Transport SGL Data Block: Not Supported 00:09:23.863 Replay Protected Memory Block: Not Supported 00:09:23.863 00:09:23.863 Firmware Slot Information 00:09:23.863 ========================= 00:09:23.863 Active slot: 1 00:09:23.863 Slot 1 Firmware Revision: 1.0 00:09:23.863 00:09:23.863 00:09:23.863 Commands Supported and Effects 00:09:23.863 ============================== 00:09:23.863 Admin Commands 00:09:23.863 -------------- 00:09:23.863 Delete I/O Submission Queue (00h): Supported 00:09:23.863 Create I/O Submission Queue (01h): Supported 00:09:23.863 Get Log Page (02h): Supported 00:09:23.863 Delete I/O Completion Queue (04h): Supported 00:09:23.863 Create I/O Completion Queue (05h): Supported 00:09:23.863 Identify (06h): Supported 00:09:23.863 Abort (08h): Supported 00:09:23.863 Set Features (09h): Supported 00:09:23.863 Get Features (0Ah): Supported 00:09:23.863 Asynchronous Event Request (0Ch): Supported 00:09:23.863 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:23.863 Directive Send (19h): Supported 00:09:23.863 Directive Receive (1Ah): Supported 00:09:23.863 Virtualization Management (1Ch): Supported 00:09:23.863 Doorbell Buffer Config (7Ch): Supported 00:09:23.863 Format NVM (80h): Supported LBA-Change 00:09:23.863 I/O Commands 00:09:23.863 ------------ 00:09:23.863 Flush (00h): Supported LBA-Change 00:09:23.863 Write (01h): Supported LBA-Change 00:09:23.863 Read (02h): Supported 00:09:23.863 Compare (05h): Supported 00:09:23.863 Write Zeroes (08h): Supported LBA-Change 00:09:23.863 Dataset Management (09h): Supported LBA-Change 00:09:23.863 Unknown (0Ch): Supported 00:09:23.863 Unknown (12h): Supported 00:09:23.863 Copy 
(19h): Supported LBA-Change 00:09:23.863 Unknown (1Dh): Supported LBA-Change 00:09:23.863 00:09:23.863 Error Log 00:09:23.863 ========= 00:09:23.863 00:09:23.863 Arbitration 00:09:23.863 =========== 00:09:23.863 Arbitration Burst: no limit 00:09:23.863 00:09:23.863 Power Management 00:09:23.863 ================ 00:09:23.863 Number of Power States: 1 00:09:23.863 Current Power State: Power State #0 00:09:23.863 Power State #0: 00:09:23.863 Max Power: 25.00 W 00:09:23.863 Non-Operational State: Operational 00:09:23.864 Entry Latency: 16 microseconds 00:09:23.864 Exit Latency: 4 microseconds 00:09:23.864 Relative Read Throughput: 0 00:09:23.864 Relative Read Latency: 0 00:09:23.864 Relative Write Throughput: 0 00:09:23.864 Relative Write Latency: 0 00:09:23.864 Idle Power: Not Reported 00:09:23.864 Active Power: Not Reported 00:09:23.864 Non-Operational Permissive Mode: Not Supported 00:09:23.864 00:09:23.864 Health Information 00:09:23.864 ================== 00:09:23.864 Critical Warnings: 00:09:23.864 Available Spare Space: OK 00:09:23.864 Temperature: OK 00:09:23.864 Device Reliability: OK 00:09:23.864 Read Only: No 00:09:23.864 Volatile Memory Backup: OK 00:09:23.864 Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.864 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:23.864 Available Spare: 0% 00:09:23.864 Available Spare Threshold: 0% 00:09:23.864 Life Percentage Used: 0% 00:09:23.864 Data Units Read: 854 00:09:23.864 Data Units Written: 783 00:09:23.864 Host Read Commands: 37951 00:09:23.864 Host Write Commands: 37374 00:09:23.864 Controller Busy Time: 0 minutes 00:09:23.864 Power Cycles: 0 00:09:23.864 Power On Hours: 0 hours 00:09:23.864 Unsafe Shutdowns: 0 00:09:23.864 Unrecoverable Media Errors: 0 00:09:23.864 Lifetime Error Log Entries: 0 00:09:23.864 Warning Temperature Time: 0 minutes 00:09:23.864 Critical Temperature Time: 0 minutes 00:09:23.864 00:09:23.864 Number of Queues 00:09:23.864 ================ 00:09:23.864 Number of I/O Submission Queues: 64 00:09:23.864 Number of I/O Completion Queues: 64 00:09:23.864 00:09:23.864 ZNS Specific Controller Data 00:09:23.864 ============================ 00:09:23.864 Zone Append Size Limit: 0 00:09:23.864 00:09:23.864 00:09:23.864 Active Namespaces 00:09:23.864 ================= 00:09:23.864 Namespace ID:1 00:09:23.864 Error Recovery Timeout: Unlimited 00:09:23.864 Command Set Identifier: NVM (00h) 00:09:23.864 Deallocate: Supported 00:09:23.864 Deallocated/Unwritten Error: Supported 00:09:23.864 Deallocated Read Value: All 0x00 00:09:23.864 Deallocate in Write Zeroes: Not Supported 00:09:23.864 Deallocated Guard Field: 0xFFFF 00:09:23.864 Flush: Supported 00:09:23.864 Reservation: Not Supported 00:09:23.864 Namespace Sharing Capabilities: Multiple Controllers 00:09:23.864 Size (in LBAs): 262144 (1GiB) 00:09:23.864 Capacity (in LBAs): 262144 (1GiB) 00:09:23.864 Utilization (in LBAs): 262144 (1GiB) 00:09:23.864 Thin Provisioning: Not Supported 00:09:23.864 Per-NS Atomic Units: No 00:09:23.864 Maximum Single Source Range Length: 128 00:09:23.864 Maximum Copy Length: 128 00:09:23.864 Maximum Source Range Count: 128 00:09:23.864 NGUID/EUI64 Never Reused: No 00:09:23.864 Namespace Write Protected: No 00:09:23.864 Endurance group ID: 1 00:09:23.864 Number of LBA Formats: 8 00:09:23.864 Current LBA Format: LBA Format #04 00:09:23.864 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:23.864 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:23.864 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:23.864 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:23.864 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:23.864 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:23.864 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:23.864 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:23.864 00:09:23.864 Get Feature FDP: 00:09:23.864 ================ 00:09:23.864 Enabled: Yes 00:09:23.864 FDP configuration index: 0 00:09:23.864 00:09:23.864 FDP configurations log page 00:09:23.864 =========================== 00:09:23.864 Number of FDP configurations: 1 00:09:23.864 Version: 0 00:09:23.864 Size: 112 00:09:23.864 FDP Configuration Descriptor: 0 00:09:23.864 Descriptor Size: 96 00:09:23.864 Reclaim Group Identifier format: 2 00:09:23.864 FDP Volatile Write Cache: Not Present 00:09:23.864 FDP Configuration: Valid 00:09:23.864 Vendor Specific Size: 0 00:09:23.864 Number of Reclaim Groups: 2 00:09:23.864 Number of Reclaim Unit Handles: 8 00:09:23.864 Max Placement Identifiers: 128 00:09:23.864 Number of Namespaces Supported: 256 00:09:23.864 Reclaim Unit Nominal Size: 6000000 bytes 00:09:23.864 Estimated Reclaim Unit Time Limit: Not Reported 00:09:23.864 RUH Desc #000: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #001: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #002: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #003: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #004: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #005: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #006: RUH Type: Initially Isolated 00:09:23.864 RUH Desc #007: RUH Type: Initially Isolated 00:09:23.864 00:09:23.864 FDP reclaim unit handle usage log page 00:09:23.864 ====================================== 00:09:23.864 Number of Reclaim Unit Handles: 8 00:09:23.864 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:23.864 RUH Usage Desc #001: RUH Attributes: Unused 00:09:23.864 RUH Usage Desc #002: RUH Attributes: Unused 00:09:23.864 RUH Usage Desc #003: RUH Attributes: Unused 00:09:23.864 RUH Usage Desc #004: RUH Attributes: Unused 00:09:23.864 RUH Usage Desc #005: RUH Attributes: Unused 00:09:23.864 RUH Usage Desc #006: RUH Attributes: Unused 00:09:23.864 RUH Usage Desc #007: RUH Attributes: Unused 00:09:23.864 00:09:23.864 FDP statistics log page 00:09:23.864 ======================= 00:09:23.864 Host bytes with metadata written: 504864768 00:09:23.864 Media bytes with metadata written: 504922112 00:09:23.864 Media bytes erased: 0 00:09:23.864 00:09:23.864 FDP events log page 00:09:23.864 =================== 00:09:23.864 Number of FDP events: 0 00:09:23.864 00:09:23.864 NVM Specific Namespace Data 00:09:23.864 =========================== 00:09:23.864 Logical Block Storage Tag Mask: 0 00:09:23.864 Protection Information Capabilities: 00:09:23.864 16b Guard Protection Information Storage Tag Support: No 00:09:23.864 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:23.864 Storage Tag Check Read Support: No 00:09:23.864 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:23.864 00:09:23.864 real 0m1.593s 00:09:23.864 user 0m0.583s 00:09:23.864 sys 0m0.816s 00:09:23.864 04:33:13 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.864 04:33:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:23.864 ************************************ 00:09:23.864 END TEST nvme_identify 00:09:23.864 ************************************ 00:09:23.864 04:33:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:23.864 04:33:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.864 04:33:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.864 04:33:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.122 ************************************ 00:09:24.122 START TEST nvme_perf 00:09:24.122 ************************************ 00:09:24.122 04:33:13 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:09:24.122 04:33:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:25.499 Initializing NVMe Controllers 00:09:25.499 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:25.499 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:25.499 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:25.499 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:25.499 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:25.499 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:25.499 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:25.499 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:25.499 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:25.499 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:25.499 Initialization complete. Launching workers. 
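Note on reading the latency output below: in the spdk_nvme_perf invocation above, -q 128 is the queue depth, -w read the workload, -o 12288 the I/O size in bytes, -t 1 the run time in seconds, and the doubled -L requests the per-bucket latency histograms. Each histogram row under "Range in us : Cumulative IO count" pairs a bucket's upper bound in microseconds with the running percentage of I/Os completed at or below it and the count that landed in that bucket. A minimal Python sketch of how such a cumulative table reduces to the percentile summary; the bucket bounds and counts here are illustrative placeholders, not values from this run:

    # Sketch: reduce an spdk_nvme_perf-style latency histogram to percentiles.
    # Each entry is (bucket_upper_bound_us, io_count); values are made up.
    buckets = [(8106.461, 126), (8211.740, 203), (8632.855, 459),
               (9001.330, 551), (9369.806, 342), (12107.052, 16)]

    def percentile(buckets, pct):
        """Return the first bucket bound whose cumulative I/O share
        reaches pct percent, as in the 'Cumulative IO count' column."""
        total = sum(n for _, n in buckets)
        running = 0
        for upper_us, n in buckets:
            running += n
            if 100.0 * running / total >= pct:
                return upper_us
        return buckets[-1][0]

    print(percentile(buckets, 50.0))   # median bucket, in microseconds
    print(percentile(buckets, 99.0))   # 99th percentile bucket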
00:09:25.499 ======================================================== 00:09:25.499 Latency(us) 00:09:25.499 Device Information : IOPS MiB/s Average min max 00:09:25.499 PCIE (0000:00:10.0) NSID 1 from core 0: 13372.40 156.71 9591.25 7859.75 51800.65 00:09:25.499 PCIE (0000:00:11.0) NSID 1 from core 0: 13436.39 157.46 9531.01 7600.87 44298.11 00:09:25.499 PCIE (0000:00:13.0) NSID 1 from core 0: 13436.39 157.46 9515.37 7957.62 43002.66 00:09:25.499 PCIE (0000:00:12.0) NSID 1 from core 0: 13436.39 157.46 9499.21 7958.62 41148.30 00:09:25.499 PCIE (0000:00:12.0) NSID 2 from core 0: 13436.39 157.46 9483.64 7959.08 39307.09 00:09:25.499 PCIE (0000:00:12.0) NSID 3 from core 0: 13436.39 157.46 9468.12 7957.07 37414.25 00:09:25.499 ======================================================== 00:09:25.499 Total : 80554.33 944.00 9514.71 7600.87 51800.65 00:09:25.499 00:09:25.499 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:25.499 ================================================================================= 00:09:25.499 1.00000% : 8106.461us 00:09:25.499 10.00000% : 8369.658us 00:09:25.499 25.00000% : 8632.855us 00:09:25.499 50.00000% : 9001.330us 00:09:25.499 75.00000% : 9369.806us 00:09:25.499 90.00000% : 10001.478us 00:09:25.499 95.00000% : 12107.052us 00:09:25.499 98.00000% : 16107.643us 00:09:25.499 99.00000% : 17265.709us 00:09:25.499 99.50000% : 44848.733us 00:09:25.499 99.90000% : 51376.013us 00:09:25.499 99.99000% : 51797.128us 00:09:25.499 99.99900% : 52007.685us 00:09:25.499 99.99990% : 52007.685us 00:09:25.499 99.99999% : 52007.685us 00:09:25.499 00:09:25.499 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:25.499 ================================================================================= 00:09:25.499 1.00000% : 8159.100us 00:09:25.499 10.00000% : 8422.297us 00:09:25.499 25.00000% : 8685.494us 00:09:25.499 50.00000% : 9001.330us 00:09:25.499 75.00000% : 9369.806us 00:09:25.499 90.00000% : 10054.117us 00:09:25.499 95.00000% : 12370.249us 00:09:25.499 98.00000% : 15581.250us 00:09:25.499 99.00000% : 17581.545us 00:09:25.499 99.50000% : 37479.222us 00:09:25.499 99.90000% : 44006.503us 00:09:25.499 99.99000% : 44427.618us 00:09:25.499 99.99900% : 44427.618us 00:09:25.499 99.99990% : 44427.618us 00:09:25.499 99.99999% : 44427.618us 00:09:25.499 00:09:25.499 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:25.499 ================================================================================= 00:09:25.499 1.00000% : 8159.100us 00:09:25.499 10.00000% : 8422.297us 00:09:25.499 25.00000% : 8685.494us 00:09:25.499 50.00000% : 8948.691us 00:09:25.499 75.00000% : 9369.806us 00:09:25.499 90.00000% : 9948.839us 00:09:25.499 95.00000% : 12264.970us 00:09:25.499 98.00000% : 15160.135us 00:09:25.499 99.00000% : 18213.218us 00:09:25.499 99.50000% : 35373.648us 00:09:25.499 99.90000% : 42743.158us 00:09:25.499 99.99000% : 43164.273us 00:09:25.499 99.99900% : 43164.273us 00:09:25.499 99.99990% : 43164.273us 00:09:25.499 99.99999% : 43164.273us 00:09:25.499 00:09:25.499 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:25.499 ================================================================================= 00:09:25.499 1.00000% : 8159.100us 00:09:25.499 10.00000% : 8422.297us 00:09:25.499 25.00000% : 8685.494us 00:09:25.499 50.00000% : 9001.330us 00:09:25.499 75.00000% : 9369.806us 00:09:25.499 90.00000% : 9948.839us 00:09:25.499 95.00000% : 12054.413us 00:09:25.499 98.00000% : 15475.971us 00:09:25.499 99.00000% 
: 18739.611us 00:09:25.499 99.50000% : 33899.746us 00:09:25.499 99.90000% : 40848.141us 00:09:25.499 99.99000% : 41269.256us 00:09:25.499 99.99900% : 41269.256us 00:09:25.499 99.99990% : 41269.256us 00:09:25.499 99.99999% : 41269.256us 00:09:25.499 00:09:25.499 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:25.499 ================================================================================= 00:09:25.499 1.00000% : 8159.100us 00:09:25.499 10.00000% : 8422.297us 00:09:25.499 25.00000% : 8685.494us 00:09:25.499 50.00000% : 9001.330us 00:09:25.499 75.00000% : 9369.806us 00:09:25.499 90.00000% : 10001.478us 00:09:25.499 95.00000% : 12107.052us 00:09:25.499 98.00000% : 15897.086us 00:09:25.499 99.00000% : 17581.545us 00:09:25.499 99.50000% : 32215.287us 00:09:25.499 99.90000% : 38953.124us 00:09:25.499 99.99000% : 39374.239us 00:09:25.499 99.99900% : 39374.239us 00:09:25.499 99.99990% : 39374.239us 00:09:25.499 99.99999% : 39374.239us 00:09:25.499 00:09:25.499 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:25.499 ================================================================================= 00:09:25.499 1.00000% : 8159.100us 00:09:25.499 10.00000% : 8422.297us 00:09:25.499 25.00000% : 8685.494us 00:09:25.499 50.00000% : 9001.330us 00:09:25.499 75.00000% : 9369.806us 00:09:25.499 90.00000% : 10001.478us 00:09:25.499 95.00000% : 12264.970us 00:09:25.499 98.00000% : 16107.643us 00:09:25.499 99.00000% : 17160.431us 00:09:25.499 99.50000% : 30320.270us 00:09:25.499 99.90000% : 37058.108us 00:09:25.499 99.99000% : 37479.222us 00:09:25.499 99.99900% : 37479.222us 00:09:25.499 99.99990% : 37479.222us 00:09:25.499 99.99999% : 37479.222us 00:09:25.499 00:09:25.499 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:25.499 ============================================================================== 00:09:25.499 Range in us Cumulative IO count 00:09:25.499 7843.264 - 7895.904: 0.0299% ( 4) 00:09:25.499 7895.904 - 7948.543: 0.0822% ( 7) 00:09:25.499 7948.543 - 8001.182: 0.3888% ( 41) 00:09:25.499 8001.182 - 8053.822: 0.9644% ( 77) 00:09:25.499 8053.822 - 8106.461: 1.9064% ( 126) 00:09:25.499 8106.461 - 8159.100: 3.1026% ( 160) 00:09:25.499 8159.100 - 8211.740: 4.6202% ( 203) 00:09:25.499 8211.740 - 8264.379: 6.4294% ( 242) 00:09:25.499 8264.379 - 8317.018: 8.5676% ( 286) 00:09:25.499 8317.018 - 8369.658: 10.7955% ( 298) 00:09:25.499 8369.658 - 8422.297: 13.3224% ( 338) 00:09:25.499 8422.297 - 8474.937: 16.0287% ( 362) 00:09:25.499 8474.937 - 8527.576: 18.9892% ( 396) 00:09:25.499 8527.576 - 8580.215: 22.0993% ( 416) 00:09:25.499 8580.215 - 8632.855: 25.5308% ( 459) 00:09:25.499 8632.855 - 8685.494: 29.2688% ( 500) 00:09:25.499 8685.494 - 8738.133: 33.0966% ( 512) 00:09:25.499 8738.133 - 8790.773: 36.9692% ( 518) 00:09:25.499 8790.773 - 8843.412: 41.0885% ( 551) 00:09:25.499 8843.412 - 8896.051: 45.2452% ( 556) 00:09:25.499 8896.051 - 8948.691: 49.3272% ( 546) 00:09:25.499 8948.691 - 9001.330: 53.4465% ( 551) 00:09:25.499 9001.330 - 9053.969: 57.1696% ( 498) 00:09:25.499 9053.969 - 9106.609: 60.8104% ( 487) 00:09:25.499 9106.609 - 9159.248: 64.1298% ( 444) 00:09:25.499 9159.248 - 9211.888: 67.0604% ( 392) 00:09:25.499 9211.888 - 9264.527: 70.0583% ( 401) 00:09:25.499 9264.527 - 9317.166: 72.6899% ( 352) 00:09:25.499 9317.166 - 9369.806: 75.2467% ( 342) 00:09:25.499 9369.806 - 9422.445: 77.3624% ( 283) 00:09:25.499 9422.445 - 9475.084: 79.3660% ( 268) 00:09:25.499 9475.084 - 9527.724: 81.1678% ( 241) 00:09:25.499 9527.724 - 9580.363: 
82.8050% ( 219) 00:09:25.499 9580.363 - 9633.002: 84.2554% ( 194) 00:09:25.499 9633.002 - 9685.642: 85.6086% ( 181) 00:09:25.499 9685.642 - 9738.281: 86.7972% ( 159) 00:09:25.499 9738.281 - 9790.920: 87.7093% ( 122) 00:09:25.499 9790.920 - 9843.560: 88.4794% ( 103) 00:09:25.499 9843.560 - 9896.199: 89.2494% ( 103) 00:09:25.499 9896.199 - 9948.839: 89.8026% ( 74) 00:09:25.499 9948.839 - 10001.478: 90.3260% ( 70) 00:09:25.499 10001.478 - 10054.117: 90.7147% ( 52) 00:09:25.499 10054.117 - 10106.757: 91.0362% ( 43) 00:09:25.499 10106.757 - 10159.396: 91.3278% ( 39) 00:09:25.499 10159.396 - 10212.035: 91.5969% ( 36) 00:09:25.499 10212.035 - 10264.675: 91.8660% ( 36) 00:09:25.499 10264.675 - 10317.314: 92.0754% ( 28) 00:09:25.499 10317.314 - 10369.953: 92.2324% ( 21) 00:09:25.499 10369.953 - 10422.593: 92.3968% ( 22) 00:09:25.499 10422.593 - 10475.232: 92.5538% ( 21) 00:09:25.499 10475.232 - 10527.871: 92.6734% ( 16) 00:09:25.499 10527.871 - 10580.511: 92.8080% ( 18) 00:09:25.499 10580.511 - 10633.150: 92.9052% ( 13) 00:09:25.499 10633.150 - 10685.790: 93.0248% ( 16) 00:09:25.499 10685.790 - 10738.429: 93.1145% ( 12) 00:09:25.499 10738.429 - 10791.068: 93.2192% ( 14) 00:09:25.499 10791.068 - 10843.708: 93.2940% ( 10) 00:09:25.499 10843.708 - 10896.347: 93.3687% ( 10) 00:09:25.499 10896.347 - 10948.986: 93.4809% ( 15) 00:09:25.499 10948.986 - 11001.626: 93.5706% ( 12) 00:09:25.499 11001.626 - 11054.265: 93.6678% ( 13) 00:09:25.499 11054.265 - 11106.904: 93.7500% ( 11) 00:09:25.499 11106.904 - 11159.544: 93.8248% ( 10) 00:09:25.499 11159.544 - 11212.183: 93.8771% ( 7) 00:09:25.499 11212.183 - 11264.822: 93.9444% ( 9) 00:09:25.500 11264.822 - 11317.462: 94.0117% ( 9) 00:09:25.500 11317.462 - 11370.101: 94.0715% ( 8) 00:09:25.500 11370.101 - 11422.741: 94.1612% ( 12) 00:09:25.500 11422.741 - 11475.380: 94.2434% ( 11) 00:09:25.500 11475.380 - 11528.019: 94.3032% ( 8) 00:09:25.500 11528.019 - 11580.659: 94.3705% ( 9) 00:09:25.500 11580.659 - 11633.298: 94.4228% ( 7) 00:09:25.500 11633.298 - 11685.937: 94.4752% ( 7) 00:09:25.500 11685.937 - 11738.577: 94.5350% ( 8) 00:09:25.500 11738.577 - 11791.216: 94.5948% ( 8) 00:09:25.500 11791.216 - 11843.855: 94.6696% ( 10) 00:09:25.500 11843.855 - 11896.495: 94.7518% ( 11) 00:09:25.500 11896.495 - 11949.134: 94.8116% ( 8) 00:09:25.500 11949.134 - 12001.773: 94.8639% ( 7) 00:09:25.500 12001.773 - 12054.413: 94.9611% ( 13) 00:09:25.500 12054.413 - 12107.052: 95.0209% ( 8) 00:09:25.500 12107.052 - 12159.692: 95.0807% ( 8) 00:09:25.500 12159.692 - 12212.331: 95.1555% ( 10) 00:09:25.500 12212.331 - 12264.970: 95.2153% ( 8) 00:09:25.500 12264.970 - 12317.610: 95.2975% ( 11) 00:09:25.500 12317.610 - 12370.249: 95.3574% ( 8) 00:09:25.500 12370.249 - 12422.888: 95.4321% ( 10) 00:09:25.500 12422.888 - 12475.528: 95.4620% ( 4) 00:09:25.500 12475.528 - 12528.167: 95.5069% ( 6) 00:09:25.500 12528.167 - 12580.806: 95.5592% ( 7) 00:09:25.500 12580.806 - 12633.446: 95.6190% ( 8) 00:09:25.500 12633.446 - 12686.085: 95.6714% ( 7) 00:09:25.500 12686.085 - 12738.724: 95.7312% ( 8) 00:09:25.500 12738.724 - 12791.364: 95.7536% ( 3) 00:09:25.500 12791.364 - 12844.003: 95.8059% ( 7) 00:09:25.500 12844.003 - 12896.643: 95.8657% ( 8) 00:09:25.500 12896.643 - 12949.282: 95.9330% ( 9) 00:09:25.500 12949.282 - 13001.921: 95.9779% ( 6) 00:09:25.500 13001.921 - 13054.561: 96.0526% ( 10) 00:09:25.500 13054.561 - 13107.200: 96.1050% ( 7) 00:09:25.500 13107.200 - 13159.839: 96.1648% ( 8) 00:09:25.500 13159.839 - 13212.479: 96.2321% ( 9) 00:09:25.500 13212.479 - 13265.118: 96.2919% ( 8) 
00:09:25.500 13265.118 - 13317.757: 96.3292% ( 5) 00:09:25.500 13317.757 - 13370.397: 96.3592% ( 4) 00:09:25.500 13370.397 - 13423.036: 96.4040% ( 6) 00:09:25.500 13423.036 - 13475.676: 96.4264% ( 3) 00:09:25.500 13475.676 - 13580.954: 96.5161% ( 12) 00:09:25.500 13580.954 - 13686.233: 96.5984% ( 11) 00:09:25.500 13686.233 - 13791.512: 96.6432% ( 6) 00:09:25.500 13791.512 - 13896.790: 96.7031% ( 8) 00:09:25.500 13896.790 - 14002.069: 96.7778% ( 10) 00:09:25.500 14002.069 - 14107.348: 96.8376% ( 8) 00:09:25.500 14107.348 - 14212.627: 96.9049% ( 9) 00:09:25.500 14212.627 - 14317.905: 96.9722% ( 9) 00:09:25.500 14317.905 - 14423.184: 97.0469% ( 10) 00:09:25.500 14423.184 - 14528.463: 97.0993% ( 7) 00:09:25.500 14528.463 - 14633.741: 97.1591% ( 8) 00:09:25.500 14633.741 - 14739.020: 97.2488% ( 12) 00:09:25.500 14739.020 - 14844.299: 97.2787% ( 4) 00:09:25.500 14844.299 - 14949.578: 97.3236% ( 6) 00:09:25.500 14949.578 - 15054.856: 97.3609% ( 5) 00:09:25.500 15054.856 - 15160.135: 97.3983% ( 5) 00:09:25.500 15160.135 - 15265.414: 97.4357% ( 5) 00:09:25.500 15265.414 - 15370.692: 97.4731% ( 5) 00:09:25.500 15370.692 - 15475.971: 97.5254% ( 7) 00:09:25.500 15475.971 - 15581.250: 97.6002% ( 10) 00:09:25.500 15581.250 - 15686.529: 97.6974% ( 13) 00:09:25.500 15686.529 - 15791.807: 97.7796% ( 11) 00:09:25.500 15791.807 - 15897.086: 97.8469% ( 9) 00:09:25.500 15897.086 - 16002.365: 97.9516% ( 14) 00:09:25.500 16002.365 - 16107.643: 98.0413% ( 12) 00:09:25.500 16107.643 - 16212.922: 98.1908% ( 20) 00:09:25.500 16212.922 - 16318.201: 98.3254% ( 18) 00:09:25.500 16318.201 - 16423.480: 98.4450% ( 16) 00:09:25.500 16423.480 - 16528.758: 98.5571% ( 15) 00:09:25.500 16528.758 - 16634.037: 98.6693% ( 15) 00:09:25.500 16634.037 - 16739.316: 98.7515% ( 11) 00:09:25.500 16739.316 - 16844.594: 98.8337% ( 11) 00:09:25.500 16844.594 - 16949.873: 98.9010% ( 9) 00:09:25.500 16949.873 - 17055.152: 98.9533% ( 7) 00:09:25.500 17055.152 - 17160.431: 98.9982% ( 6) 00:09:25.500 17160.431 - 17265.709: 99.0431% ( 6) 00:09:25.500 42532.601 - 42743.158: 99.0505% ( 1) 00:09:25.500 42743.158 - 42953.716: 99.1029% ( 7) 00:09:25.500 42953.716 - 43164.273: 99.1477% ( 6) 00:09:25.500 43164.273 - 43374.831: 99.1926% ( 6) 00:09:25.500 43374.831 - 43585.388: 99.2449% ( 7) 00:09:25.500 43585.388 - 43795.945: 99.2898% ( 6) 00:09:25.500 43795.945 - 44006.503: 99.3272% ( 5) 00:09:25.500 44006.503 - 44217.060: 99.3795% ( 7) 00:09:25.500 44217.060 - 44427.618: 99.4318% ( 7) 00:09:25.500 44427.618 - 44638.175: 99.4767% ( 6) 00:09:25.500 44638.175 - 44848.733: 99.5215% ( 6) 00:09:25.500 49480.996 - 49691.553: 99.5514% ( 4) 00:09:25.500 49691.553 - 49902.111: 99.6038% ( 7) 00:09:25.500 49902.111 - 50112.668: 99.6561% ( 7) 00:09:25.500 50112.668 - 50323.226: 99.6935% ( 5) 00:09:25.500 50323.226 - 50533.783: 99.7309% ( 5) 00:09:25.500 50533.783 - 50744.341: 99.7757% ( 6) 00:09:25.500 50744.341 - 50954.898: 99.8131% ( 5) 00:09:25.500 50954.898 - 51165.455: 99.8580% ( 6) 00:09:25.500 51165.455 - 51376.013: 99.9178% ( 8) 00:09:25.500 51376.013 - 51586.570: 99.9626% ( 6) 00:09:25.500 51586.570 - 51797.128: 99.9925% ( 4) 00:09:25.500 51797.128 - 52007.685: 100.0000% ( 1) 00:09:25.500 00:09:25.500 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:25.500 ============================================================================== 00:09:25.500 Range in us Cumulative IO count 00:09:25.500 7580.067 - 7632.707: 0.0372% ( 5) 00:09:25.500 7632.707 - 7685.346: 0.0967% ( 8) 00:09:25.500 7685.346 - 7737.986: 0.1265% ( 4) 00:09:25.500 
7737.986 - 7790.625: 0.1339% ( 1) 00:09:25.500 7790.625 - 7843.264: 0.1562% ( 3) 00:09:25.500 7843.264 - 7895.904: 0.1860% ( 4) 00:09:25.500 7895.904 - 7948.543: 0.2158% ( 4) 00:09:25.500 7948.543 - 8001.182: 0.2902% ( 10) 00:09:25.500 8001.182 - 8053.822: 0.4985% ( 28) 00:09:25.500 8053.822 - 8106.461: 0.9077% ( 55) 00:09:25.500 8106.461 - 8159.100: 1.7560% ( 114) 00:09:25.500 8159.100 - 8211.740: 2.9390% ( 159) 00:09:25.500 8211.740 - 8264.379: 4.5312% ( 214) 00:09:25.500 8264.379 - 8317.018: 6.4881% ( 263) 00:09:25.500 8317.018 - 8369.658: 8.8170% ( 313) 00:09:25.500 8369.658 - 8422.297: 11.4062% ( 348) 00:09:25.500 8422.297 - 8474.937: 14.2039% ( 376) 00:09:25.500 8474.937 - 8527.576: 17.2917% ( 415) 00:09:25.500 8527.576 - 8580.215: 20.6324% ( 449) 00:09:25.500 8580.215 - 8632.855: 24.3304% ( 497) 00:09:25.500 8632.855 - 8685.494: 28.2440% ( 526) 00:09:25.500 8685.494 - 8738.133: 32.4702% ( 568) 00:09:25.500 8738.133 - 8790.773: 36.7262% ( 572) 00:09:25.500 8790.773 - 8843.412: 41.0417% ( 580) 00:09:25.500 8843.412 - 8896.051: 45.4688% ( 595) 00:09:25.500 8896.051 - 8948.691: 49.7619% ( 577) 00:09:25.500 8948.691 - 9001.330: 53.9286% ( 560) 00:09:25.500 9001.330 - 9053.969: 57.8125% ( 522) 00:09:25.500 9053.969 - 9106.609: 61.4807% ( 493) 00:09:25.500 9106.609 - 9159.248: 64.7396% ( 438) 00:09:25.500 9159.248 - 9211.888: 67.7827% ( 409) 00:09:25.500 9211.888 - 9264.527: 70.6250% ( 382) 00:09:25.500 9264.527 - 9317.166: 73.3705% ( 369) 00:09:25.500 9317.166 - 9369.806: 75.8110% ( 328) 00:09:25.500 9369.806 - 9422.445: 78.0060% ( 295) 00:09:25.500 9422.445 - 9475.084: 80.0446% ( 274) 00:09:25.500 9475.084 - 9527.724: 81.9271% ( 253) 00:09:25.500 9527.724 - 9580.363: 83.4598% ( 206) 00:09:25.500 9580.363 - 9633.002: 84.7247% ( 170) 00:09:25.500 9633.002 - 9685.642: 85.9375% ( 163) 00:09:25.500 9685.642 - 9738.281: 86.9494% ( 136) 00:09:25.500 9738.281 - 9790.920: 87.7530% ( 108) 00:09:25.500 9790.920 - 9843.560: 88.4301% ( 91) 00:09:25.500 9843.560 - 9896.199: 88.9658% ( 72) 00:09:25.500 9896.199 - 9948.839: 89.4792% ( 69) 00:09:25.500 9948.839 - 10001.478: 89.9033% ( 57) 00:09:25.500 10001.478 - 10054.117: 90.2679% ( 49) 00:09:25.500 10054.117 - 10106.757: 90.6771% ( 55) 00:09:25.500 10106.757 - 10159.396: 90.9970% ( 43) 00:09:25.500 10159.396 - 10212.035: 91.2798% ( 38) 00:09:25.500 10212.035 - 10264.675: 91.5030% ( 30) 00:09:25.500 10264.675 - 10317.314: 91.6890% ( 25) 00:09:25.500 10317.314 - 10369.953: 91.8676% ( 24) 00:09:25.500 10369.953 - 10422.593: 92.0089% ( 19) 00:09:25.500 10422.593 - 10475.232: 92.1280% ( 16) 00:09:25.500 10475.232 - 10527.871: 92.2396% ( 15) 00:09:25.500 10527.871 - 10580.511: 92.3661% ( 17) 00:09:25.500 10580.511 - 10633.150: 92.5000% ( 18) 00:09:25.500 10633.150 - 10685.790: 92.6116% ( 15) 00:09:25.500 10685.790 - 10738.429: 92.7455% ( 18) 00:09:25.500 10738.429 - 10791.068: 92.8720% ( 17) 00:09:25.500 10791.068 - 10843.708: 92.9836% ( 15) 00:09:25.500 10843.708 - 10896.347: 93.0952% ( 15) 00:09:25.500 10896.347 - 10948.986: 93.1994% ( 14) 00:09:25.500 10948.986 - 11001.626: 93.3259% ( 17) 00:09:25.500 11001.626 - 11054.265: 93.4375% ( 15) 00:09:25.500 11054.265 - 11106.904: 93.5342% ( 13) 00:09:25.500 11106.904 - 11159.544: 93.6458% ( 15) 00:09:25.500 11159.544 - 11212.183: 93.7574% ( 15) 00:09:25.500 11212.183 - 11264.822: 93.8393% ( 11) 00:09:25.500 11264.822 - 11317.462: 93.9137% ( 10) 00:09:25.500 11317.462 - 11370.101: 94.0030% ( 12) 00:09:25.500 11370.101 - 11422.741: 94.0774% ( 10) 00:09:25.500 11422.741 - 11475.380: 94.1369% ( 8) 00:09:25.500 
11475.380 - 11528.019: 94.1964% ( 8) 00:09:25.500 11528.019 - 11580.659: 94.2708% ( 10) 00:09:25.500 11580.659 - 11633.298: 94.3452% ( 10) 00:09:25.500 11633.298 - 11685.937: 94.3973% ( 7) 00:09:25.500 11685.937 - 11738.577: 94.4494% ( 7) 00:09:25.500 11738.577 - 11791.216: 94.5015% ( 7) 00:09:25.500 11791.216 - 11843.855: 94.5461% ( 6) 00:09:25.500 11843.855 - 11896.495: 94.5982% ( 7) 00:09:25.500 11896.495 - 11949.134: 94.6354% ( 5) 00:09:25.500 11949.134 - 12001.773: 94.6652% ( 4) 00:09:25.500 12001.773 - 12054.413: 94.7024% ( 5) 00:09:25.500 12054.413 - 12107.052: 94.7396% ( 5) 00:09:25.500 12107.052 - 12159.692: 94.7768% ( 5) 00:09:25.500 12159.692 - 12212.331: 94.8065% ( 4) 00:09:25.500 12212.331 - 12264.970: 94.8586% ( 7) 00:09:25.500 12264.970 - 12317.610: 94.9479% ( 12) 00:09:25.500 12317.610 - 12370.249: 95.0223% ( 10) 00:09:25.500 12370.249 - 12422.888: 95.0818% ( 8) 00:09:25.500 12422.888 - 12475.528: 95.1488% ( 9) 00:09:25.500 12475.528 - 12528.167: 95.2381% ( 12) 00:09:25.500 12528.167 - 12580.806: 95.3274% ( 12) 00:09:25.500 12580.806 - 12633.446: 95.4315% ( 14) 00:09:25.500 12633.446 - 12686.085: 95.5134% ( 11) 00:09:25.500 12686.085 - 12738.724: 95.5952% ( 11) 00:09:25.500 12738.724 - 12791.364: 95.6920% ( 13) 00:09:25.500 12791.364 - 12844.003: 95.7812% ( 12) 00:09:25.500 12844.003 - 12896.643: 95.8705% ( 12) 00:09:25.500 12896.643 - 12949.282: 95.9375% ( 9) 00:09:25.500 12949.282 - 13001.921: 95.9970% ( 8) 00:09:25.500 13001.921 - 13054.561: 96.0565% ( 8) 00:09:25.500 13054.561 - 13107.200: 96.1086% ( 7) 00:09:25.500 13107.200 - 13159.839: 96.1384% ( 4) 00:09:25.500 13159.839 - 13212.479: 96.1682% ( 4) 00:09:25.500 13212.479 - 13265.118: 96.1905% ( 3) 00:09:25.500 13265.118 - 13317.757: 96.2202% ( 4) 00:09:25.500 13317.757 - 13370.397: 96.2351% ( 2) 00:09:25.500 13370.397 - 13423.036: 96.2649% ( 4) 00:09:25.500 13423.036 - 13475.676: 96.2946% ( 4) 00:09:25.500 13475.676 - 13580.954: 96.3690% ( 10) 00:09:25.500 13580.954 - 13686.233: 96.4509% ( 11) 00:09:25.500 13686.233 - 13791.512: 96.5253% ( 10) 00:09:25.500 13791.512 - 13896.790: 96.6443% ( 16) 00:09:25.500 13896.790 - 14002.069: 96.7634% ( 16) 00:09:25.500 14002.069 - 14107.348: 96.8676% ( 14) 00:09:25.500 14107.348 - 14212.627: 96.9568% ( 12) 00:09:25.500 14212.627 - 14317.905: 97.0164% ( 8) 00:09:25.500 14317.905 - 14423.184: 97.0685% ( 7) 00:09:25.500 14423.184 - 14528.463: 97.1280% ( 8) 00:09:25.500 14528.463 - 14633.741: 97.1801% ( 7) 00:09:25.500 14633.741 - 14739.020: 97.2396% ( 8) 00:09:25.500 14739.020 - 14844.299: 97.3140% ( 10) 00:09:25.500 14844.299 - 14949.578: 97.3958% ( 11) 00:09:25.500 14949.578 - 15054.856: 97.4777% ( 11) 00:09:25.500 15054.856 - 15160.135: 97.5967% ( 16) 00:09:25.500 15160.135 - 15265.414: 97.7158% ( 16) 00:09:25.500 15265.414 - 15370.692: 97.8348% ( 16) 00:09:25.500 15370.692 - 15475.971: 97.9539% ( 16) 00:09:25.500 15475.971 - 15581.250: 98.0283% ( 10) 00:09:25.500 15581.250 - 15686.529: 98.1027% ( 10) 00:09:25.500 15686.529 - 15791.807: 98.1696% ( 9) 00:09:25.500 15791.807 - 15897.086: 98.2366% ( 9) 00:09:25.500 15897.086 - 16002.365: 98.2961% ( 8) 00:09:25.500 16002.365 - 16107.643: 98.3557% ( 8) 00:09:25.500 16107.643 - 16212.922: 98.4226% ( 9) 00:09:25.500 16212.922 - 16318.201: 98.4375% ( 2) 00:09:25.500 16318.201 - 16423.480: 98.4598% ( 3) 00:09:25.500 16423.480 - 16528.758: 98.4821% ( 3) 00:09:25.500 16528.758 - 16634.037: 98.5119% ( 4) 00:09:25.500 16634.037 - 16739.316: 98.5863% ( 10) 00:09:25.500 16739.316 - 16844.594: 98.6533% ( 9) 00:09:25.500 16844.594 - 16949.873: 
98.7202% ( 9) 00:09:25.500 16949.873 - 17055.152: 98.7798% ( 8) 00:09:25.500 17055.152 - 17160.431: 98.8318% ( 7) 00:09:25.500 17160.431 - 17265.709: 98.8839% ( 7) 00:09:25.500 17265.709 - 17370.988: 98.9360% ( 7) 00:09:25.500 17370.988 - 17476.267: 98.9881% ( 7) 00:09:25.500 17476.267 - 17581.545: 99.0476% ( 8) 00:09:25.500 35373.648 - 35584.206: 99.0625% ( 2) 00:09:25.500 35584.206 - 35794.763: 99.1146% ( 7) 00:09:25.500 35794.763 - 36005.320: 99.1592% ( 6) 00:09:25.500 36005.320 - 36215.878: 99.2039% ( 6) 00:09:25.500 36215.878 - 36426.435: 99.2560% ( 7) 00:09:25.500 36426.435 - 36636.993: 99.3155% ( 8) 00:09:25.500 36636.993 - 36847.550: 99.3750% ( 8) 00:09:25.500 36847.550 - 37058.108: 99.4271% ( 7) 00:09:25.501 37058.108 - 37268.665: 99.4792% ( 7) 00:09:25.501 37268.665 - 37479.222: 99.5238% ( 6) 00:09:25.501 42111.486 - 42322.043: 99.5387% ( 2) 00:09:25.501 42322.043 - 42532.601: 99.5833% ( 6) 00:09:25.501 42532.601 - 42743.158: 99.6429% ( 8) 00:09:25.501 42743.158 - 42953.716: 99.6875% ( 6) 00:09:25.501 42953.716 - 43164.273: 99.7321% ( 6) 00:09:25.501 43164.273 - 43374.831: 99.7842% ( 7) 00:09:25.501 43374.831 - 43585.388: 99.8363% ( 7) 00:09:25.501 43585.388 - 43795.945: 99.8810% ( 6) 00:09:25.501 43795.945 - 44006.503: 99.9330% ( 7) 00:09:25.501 44006.503 - 44217.060: 99.9777% ( 6) 00:09:25.501 44217.060 - 44427.618: 100.0000% ( 3) 00:09:25.501 00:09:25.501 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:25.501 ============================================================================== 00:09:25.501 Range in us Cumulative IO count 00:09:25.501 7948.543 - 8001.182: 0.0744% ( 10) 00:09:25.501 8001.182 - 8053.822: 0.2679% ( 26) 00:09:25.501 8053.822 - 8106.461: 0.7366% ( 63) 00:09:25.501 8106.461 - 8159.100: 1.5179% ( 105) 00:09:25.501 8159.100 - 8211.740: 2.7083% ( 160) 00:09:25.501 8211.740 - 8264.379: 4.2039% ( 201) 00:09:25.501 8264.379 - 8317.018: 6.2500% ( 275) 00:09:25.501 8317.018 - 8369.658: 8.6607% ( 324) 00:09:25.501 8369.658 - 8422.297: 11.2872% ( 353) 00:09:25.501 8422.297 - 8474.937: 14.0476% ( 371) 00:09:25.501 8474.937 - 8527.576: 17.0833% ( 408) 00:09:25.501 8527.576 - 8580.215: 20.4092% ( 447) 00:09:25.501 8580.215 - 8632.855: 23.9211% ( 472) 00:09:25.501 8632.855 - 8685.494: 27.8125% ( 523) 00:09:25.501 8685.494 - 8738.133: 31.9494% ( 556) 00:09:25.501 8738.133 - 8790.773: 36.5179% ( 614) 00:09:25.501 8790.773 - 8843.412: 40.9821% ( 600) 00:09:25.501 8843.412 - 8896.051: 45.5357% ( 612) 00:09:25.501 8896.051 - 8948.691: 50.0074% ( 601) 00:09:25.501 8948.691 - 9001.330: 54.3304% ( 581) 00:09:25.501 9001.330 - 9053.969: 58.2366% ( 525) 00:09:25.501 9053.969 - 9106.609: 61.8452% ( 485) 00:09:25.501 9106.609 - 9159.248: 65.1562% ( 445) 00:09:25.501 9159.248 - 9211.888: 68.2366% ( 414) 00:09:25.501 9211.888 - 9264.527: 71.0863% ( 383) 00:09:25.501 9264.527 - 9317.166: 73.7649% ( 360) 00:09:25.501 9317.166 - 9369.806: 76.2946% ( 340) 00:09:25.501 9369.806 - 9422.445: 78.6235% ( 313) 00:09:25.501 9422.445 - 9475.084: 80.6994% ( 279) 00:09:25.501 9475.084 - 9527.724: 82.5149% ( 244) 00:09:25.501 9527.724 - 9580.363: 84.0402% ( 205) 00:09:25.501 9580.363 - 9633.002: 85.4762% ( 193) 00:09:25.501 9633.002 - 9685.642: 86.5774% ( 148) 00:09:25.501 9685.642 - 9738.281: 87.5595% ( 132) 00:09:25.501 9738.281 - 9790.920: 88.3259% ( 103) 00:09:25.501 9790.920 - 9843.560: 88.9881% ( 89) 00:09:25.501 9843.560 - 9896.199: 89.5461% ( 75) 00:09:25.501 9896.199 - 9948.839: 90.0074% ( 62) 00:09:25.501 9948.839 - 10001.478: 90.4539% ( 60) 00:09:25.501 10001.478 - 
10054.117: 90.8110% ( 48) 00:09:25.501 10054.117 - 10106.757: 91.1384% ( 44) 00:09:25.501 10106.757 - 10159.396: 91.3914% ( 34) 00:09:25.501 10159.396 - 10212.035: 91.5848% ( 26) 00:09:25.501 10212.035 - 10264.675: 91.7411% ( 21) 00:09:25.501 10264.675 - 10317.314: 91.8676% ( 17) 00:09:25.501 10317.314 - 10369.953: 91.9940% ( 17) 00:09:25.501 10369.953 - 10422.593: 92.1057% ( 15) 00:09:25.501 10422.593 - 10475.232: 92.2173% ( 15) 00:09:25.501 10475.232 - 10527.871: 92.3512% ( 18) 00:09:25.501 10527.871 - 10580.511: 92.4628% ( 15) 00:09:25.501 10580.511 - 10633.150: 92.5670% ( 14) 00:09:25.501 10633.150 - 10685.790: 92.6786% ( 15) 00:09:25.501 10685.790 - 10738.429: 92.7753% ( 13) 00:09:25.501 10738.429 - 10791.068: 92.8646% ( 12) 00:09:25.501 10791.068 - 10843.708: 92.9613% ( 13) 00:09:25.501 10843.708 - 10896.347: 93.0357% ( 10) 00:09:25.501 10896.347 - 10948.986: 93.1250% ( 12) 00:09:25.501 10948.986 - 11001.626: 93.1845% ( 8) 00:09:25.501 11001.626 - 11054.265: 93.2440% ( 8) 00:09:25.501 11054.265 - 11106.904: 93.3110% ( 9) 00:09:25.501 11106.904 - 11159.544: 93.3780% ( 9) 00:09:25.501 11159.544 - 11212.183: 93.4375% ( 8) 00:09:25.501 11212.183 - 11264.822: 93.5268% ( 12) 00:09:25.501 11264.822 - 11317.462: 93.6161% ( 12) 00:09:25.501 11317.462 - 11370.101: 93.7128% ( 13) 00:09:25.501 11370.101 - 11422.741: 93.8021% ( 12) 00:09:25.501 11422.741 - 11475.380: 93.8914% ( 12) 00:09:25.501 11475.380 - 11528.019: 93.9807% ( 12) 00:09:25.501 11528.019 - 11580.659: 94.0774% ( 13) 00:09:25.501 11580.659 - 11633.298: 94.1667% ( 12) 00:09:25.501 11633.298 - 11685.937: 94.2485% ( 11) 00:09:25.501 11685.937 - 11738.577: 94.3304% ( 11) 00:09:25.501 11738.577 - 11791.216: 94.4196% ( 12) 00:09:25.501 11791.216 - 11843.855: 94.5164% ( 13) 00:09:25.501 11843.855 - 11896.495: 94.5908% ( 10) 00:09:25.501 11896.495 - 11949.134: 94.6503% ( 8) 00:09:25.501 11949.134 - 12001.773: 94.6949% ( 6) 00:09:25.501 12001.773 - 12054.413: 94.7545% ( 8) 00:09:25.501 12054.413 - 12107.052: 94.8140% ( 8) 00:09:25.501 12107.052 - 12159.692: 94.8958% ( 11) 00:09:25.501 12159.692 - 12212.331: 94.9479% ( 7) 00:09:25.501 12212.331 - 12264.970: 95.0149% ( 9) 00:09:25.501 12264.970 - 12317.610: 95.0744% ( 8) 00:09:25.501 12317.610 - 12370.249: 95.1339% ( 8) 00:09:25.501 12370.249 - 12422.888: 95.2009% ( 9) 00:09:25.501 12422.888 - 12475.528: 95.2307% ( 4) 00:09:25.501 12475.528 - 12528.167: 95.2530% ( 3) 00:09:25.501 12528.167 - 12580.806: 95.2827% ( 4) 00:09:25.501 12580.806 - 12633.446: 95.3348% ( 7) 00:09:25.501 12633.446 - 12686.085: 95.3795% ( 6) 00:09:25.501 12686.085 - 12738.724: 95.4092% ( 4) 00:09:25.501 12738.724 - 12791.364: 95.4539% ( 6) 00:09:25.501 12791.364 - 12844.003: 95.4836% ( 4) 00:09:25.501 12844.003 - 12896.643: 95.5134% ( 4) 00:09:25.501 12896.643 - 12949.282: 95.5506% ( 5) 00:09:25.501 12949.282 - 13001.921: 95.5804% ( 4) 00:09:25.501 13001.921 - 13054.561: 95.6027% ( 3) 00:09:25.501 13054.561 - 13107.200: 95.6250% ( 3) 00:09:25.501 13107.200 - 13159.839: 95.6548% ( 4) 00:09:25.501 13159.839 - 13212.479: 95.6845% ( 4) 00:09:25.501 13212.479 - 13265.118: 95.7068% ( 3) 00:09:25.501 13265.118 - 13317.757: 95.7366% ( 4) 00:09:25.501 13317.757 - 13370.397: 95.7664% ( 4) 00:09:25.501 13370.397 - 13423.036: 95.7961% ( 4) 00:09:25.501 13423.036 - 13475.676: 95.8259% ( 4) 00:09:25.501 13475.676 - 13580.954: 95.9226% ( 13) 00:09:25.501 13580.954 - 13686.233: 96.0119% ( 12) 00:09:25.501 13686.233 - 13791.512: 96.1384% ( 17) 00:09:25.501 13791.512 - 13896.790: 96.2723% ( 18) 00:09:25.501 13896.790 - 14002.069: 
00:09:25.501 [histogram rows omitted: tail of the preceding histogram, buckets from 14002.069us to 43164.273us, reaching cumulative 100.0000%]
00:09:25.501 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:25.501 ==============================================================================
00:09:25.501 Range in us Cumulative IO count
00:09:25.502 [histogram rows omitted: buckets from 7948.543us to 41269.256us, reaching cumulative 100.0000%]
00:09:25.502 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:25.502 ==============================================================================
00:09:25.502 Range in us Cumulative IO count
00:09:25.503 [histogram rows omitted: buckets from 7948.543us to 39374.239us, reaching cumulative 100.0000%]
00:09:25.503 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:25.503 ==============================================================================
00:09:25.503 Range in us Cumulative IO count
00:09:25.503 [histogram rows omitted: buckets from 7948.543us to 37479.222us, reaching cumulative 100.0000%]
00:09:25.503 04:33:14 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:26.881 Initializing NVMe Controllers
00:09:26.881 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:26.881 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:26.881 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:26.881 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:26.881 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:26.881 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:26.881 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:26.881 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:26.881 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:26.881 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:26.881 Initialization complete. Launching workers.
00:09:26.881 ========================================================
00:09:26.881 Latency(us)
00:09:26.881 Device Information : IOPS MiB/s Average min max
00:09:26.881 PCIE (0000:00:10.0) NSID 1 from core 0: 9379.05 109.91 13682.59 8550.24 45618.15
00:09:26.881 PCIE (0000:00:11.0) NSID 1 from core 0: 9379.05 109.91 13662.15 8803.90 44094.34
00:09:26.881 PCIE (0000:00:13.0) NSID 1 from core 0: 9379.05 109.91 13642.31 8767.03 43493.76
00:09:26.881 PCIE (0000:00:12.0) NSID 1 from core 0: 9379.05 109.91 13623.15 8849.56 42129.65
00:09:26.881 PCIE (0000:00:12.0) NSID 2 from core 0: 9379.05 109.91 13603.36 8638.71 40745.28
00:09:26.881 PCIE (0000:00:12.0) NSID 3 from core 0: 9442.85 110.66 13490.96 8852.32 31122.28
00:09:26.881 ========================================================
00:09:26.881 Total : 56338.08 660.21 13617.28 8550.24 45618.15
00:09:26.881
00:09:26.881 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:26.881 =================================================================================
00:09:26.881 1.00000% : 9106.609us
00:09:26.881 10.00000% : 10422.593us
00:09:26.881 25.00000% : 11106.904us
00:09:26.881 50.00000% : 12475.528us
00:09:26.881 75.00000% : 15791.807us
00:09:26.881 90.00000% : 17897.382us
00:09:26.881 95.00000% : 18844.890us
00:09:26.881 98.00000% : 21161.022us
00:09:26.881 99.00000% : 34320.861us
00:09:26.881 99.50000% : 43585.388us
00:09:26.881 99.90000% : 45269.847us
00:09:26.881 99.99000% : 45690.962us
00:09:26.881 99.99900% : 45690.962us
00:09:26.881 99.99990% : 45690.962us
00:09:26.881 99.99999% : 45690.962us
00:09:26.881
00:09:26.881 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:26.881 =================================================================================
00:09:26.881 1.00000% : 9106.609us
00:09:26.881 10.00000% : 10475.232us
00:09:26.881 25.00000% : 11159.544us
00:09:26.881 50.00000% : 12528.167us
00:09:26.881 75.00000% : 15791.807us
00:09:26.881 90.00000% : 18002.660us
00:09:26.881 95.00000% : 19055.447us
00:09:26.881 98.00000% : 21266.300us
00:09:26.881 99.00000% : 33478.631us
00:09:26.881 99.50000% : 42532.601us
00:09:26.881 99.90000% : 43795.945us
00:09:26.881 99.99000% : 44217.060us
00:09:26.881 99.99900% : 44217.060us
00:09:26.881 99.99990% : 44217.060us
00:09:26.881 99.99999% : 44217.060us
00:09:26.881
00:09:26.881 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:26.881 =================================================================================
00:09:26.881 1.00000% : 9159.248us
00:09:26.881 10.00000% : 10527.871us
00:09:26.881 25.00000% : 11159.544us
00:09:26.881 50.00000% : 12580.806us
00:09:26.881 75.00000% : 15581.250us
00:09:26.881 90.00000% : 17897.382us
00:09:26.881 95.00000% : 19160.726us
00:09:26.881 98.00000% : 20002.956us
00:09:26.881 99.00000% : 32846.959us
00:09:26.881 99.50000% : 41900.929us
00:09:26.881 99.90000% : 43164.273us
00:09:26.881 99.99000% : 43585.388us
00:09:26.881 99.99900% : 43585.388us
00:09:26.881 99.99990% : 43585.388us
00:09:26.881 99.99999% : 43585.388us
00:09:26.881
00:09:26.881 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:26.881 =================================================================================
00:09:26.881 1.00000% : 9106.609us
00:09:26.881 10.00000% : 10422.593us
00:09:26.881 25.00000% : 11159.544us
00:09:26.881 50.00000% : 12580.806us
00:09:26.881 75.00000% : 15475.971us
00:09:26.881 90.00000% : 17897.382us
00:09:26.881 95.00000% : 19160.726us
00:09:26.881 98.00000% : 20002.956us
00:09:26.881 99.00000% : 31373.057us
00:09:26.881 99.50000% : 40427.027us
00:09:26.881 99.90000% : 41900.929us
00:09:26.881 99.99000% : 42322.043us
00:09:26.881 99.99900% : 42322.043us
00:09:26.881 99.99990% : 42322.043us
00:09:26.881 99.99999% : 42322.043us
00:09:26.881
00:09:26.881 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:26.881 =================================================================================
00:09:26.881 1.00000% : 9159.248us
00:09:26.881 10.00000% : 10422.593us
00:09:26.881 25.00000% : 11159.544us
00:09:26.881 50.00000% : 12580.806us
00:09:26.881 75.00000% : 15475.971us
00:09:26.881 90.00000% : 18002.660us
00:09:26.881 95.00000% : 18634.333us
00:09:26.881 98.00000% : 20529.349us
00:09:26.881 99.00000% : 30109.712us
00:09:26.881 99.50000% : 39163.682us
00:09:26.881 99.90000% : 40427.027us
00:09:26.881 99.99000% : 40848.141us
00:09:26.881 99.99900% : 40848.141us
00:09:26.881 99.99990% : 40848.141us
00:09:26.881 99.99999% : 40848.141us
00:09:26.881
00:09:26.881 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:26.881 =================================================================================
00:09:26.881 1.00000% : 9211.888us
00:09:26.881 10.00000% : 10475.232us
00:09:26.881 25.00000% : 11159.544us
00:09:26.881 50.00000% : 12528.167us
00:09:26.881 75.00000% : 15791.807us
00:09:26.881 90.00000% : 17897.382us
00:09:26.881 95.00000% : 18739.611us
00:09:26.881 98.00000% : 20845.186us
00:09:26.881 99.00000% : 21371.579us
00:09:26.881 99.50000% : 29478.040us
00:09:26.881 99.90000% : 30951.942us
00:09:26.882 99.99000% : 31162.500us
00:09:26.882 99.99900% : 31162.500us
00:09:26.882 99.99990% : 31162.500us
00:09:26.882 99.99999% : 31162.500us
00:09:26.882
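[editor's note] Reading the histograms that follow: each row "lo - hi: pct% ( n )" is a latency bucket in microseconds, where pct is the cumulative share of I/Os completed by the bucket's upper bound and n is the I/O count that landed in that bucket. The percentile summaries above are read off the first bucket whose cumulative percentage reaches the target; e.g. the 50.00000% : 12475.528us entry for 0000:00:10.0 matches the bucket where the cumulative column first hits 50.0000%. A minimal awk sketch of that lookup (assumes the rows were saved, without the log's timestamp prefixes, to a hypothetical file histogram.txt):

  # print the upper bound of the first bucket at or above the median
  awk '$4+0 >= 50 { sub(":", "", $3); print "p50 <= " $3 "us"; exit }' histogram.txt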
00:09:26.882 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:26.882 ==============================================================================
00:09:26.882 Range in us Cumulative IO count
00:09:26.882 [histogram rows omitted: buckets from 8527.576us to 45690.962us, reaching cumulative 100.0000%]
00:09:26.883 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:26.883 ==============================================================================
00:09:26.883 Range in us Cumulative IO count
00:09:26.883 [histogram rows omitted: buckets from 8790.773us to 44217.060us, reaching cumulative 100.0000%]
00:09:26.884 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:26.884 ==============================================================================
00:09:26.884 Range in us Cumulative IO count
00:09:26.884 [histogram rows omitted: buckets from 8738.133us to 43585.388us, reaching cumulative 100.0000%]
00:09:26.885 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:26.885 ==============================================================================
00:09:26.885 Range in us Cumulative IO count
00:09:26.885 [histogram rows omitted: buckets from 8843.412us to 42322.043us, reaching cumulative 100.0000%]
00:09:26.886 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:26.886 ==============================================================================
00:09:26.886 Range in us Cumulative IO count
00:09:26.886 [histogram rows omitted: buckets from 8632.855us; the captured log ends mid-histogram at the 9159.248us bucket]
13) 00:09:26.886 9159.248 - 9211.888: 1.2649% ( 15) 00:09:26.886 9211.888 - 9264.527: 1.4775% ( 20) 00:09:26.886 9264.527 - 9317.166: 1.6901% ( 20) 00:09:26.886 9317.166 - 9369.806: 1.8814% ( 18) 00:09:26.886 9369.806 - 9422.445: 2.1896% ( 29) 00:09:26.886 9422.445 - 9475.084: 2.4979% ( 29) 00:09:26.886 9475.084 - 9527.724: 2.9656% ( 44) 00:09:26.886 9527.724 - 9580.363: 3.5502% ( 55) 00:09:26.886 9580.363 - 9633.002: 4.1773% ( 59) 00:09:26.886 9633.002 - 9685.642: 4.7725% ( 56) 00:09:26.886 9685.642 - 9738.281: 5.3146% ( 51) 00:09:26.886 9738.281 - 9790.920: 5.6016% ( 27) 00:09:26.886 9790.920 - 9843.560: 5.8461% ( 23) 00:09:26.886 9843.560 - 9896.199: 6.1969% ( 33) 00:09:26.886 9896.199 - 9948.839: 6.4732% ( 26) 00:09:26.886 9948.839 - 10001.478: 6.6433% ( 16) 00:09:26.886 10001.478 - 10054.117: 6.9196% ( 26) 00:09:26.886 10054.117 - 10106.757: 7.1960% ( 26) 00:09:26.886 10106.757 - 10159.396: 7.5255% ( 31) 00:09:26.886 10159.396 - 10212.035: 7.9826% ( 43) 00:09:26.886 10212.035 - 10264.675: 8.4077% ( 40) 00:09:26.886 10264.675 - 10317.314: 9.0455% ( 60) 00:09:26.886 10317.314 - 10369.953: 9.4919% ( 42) 00:09:26.886 10369.953 - 10422.593: 10.0340% ( 51) 00:09:26.886 10422.593 - 10475.232: 10.5017% ( 44) 00:09:26.886 10475.232 - 10527.871: 10.9588% ( 43) 00:09:26.886 10527.871 - 10580.511: 11.5540% ( 56) 00:09:26.886 10580.511 - 10633.150: 12.1599% ( 57) 00:09:26.886 10633.150 - 10685.790: 12.9889% ( 78) 00:09:26.886 10685.790 - 10738.429: 13.8712% ( 83) 00:09:26.886 10738.429 - 10791.068: 14.6471% ( 73) 00:09:26.886 10791.068 - 10843.708: 15.5825% ( 88) 00:09:26.886 10843.708 - 10896.347: 16.9643% ( 130) 00:09:26.886 10896.347 - 10948.986: 18.6012% ( 154) 00:09:26.886 10948.986 - 11001.626: 20.2275% ( 153) 00:09:26.886 11001.626 - 11054.265: 22.0876% ( 175) 00:09:26.886 11054.265 - 11106.904: 24.5217% ( 229) 00:09:26.886 11106.904 - 11159.544: 26.7538% ( 210) 00:09:26.886 11159.544 - 11212.183: 28.6458% ( 178) 00:09:26.886 11212.183 - 11264.822: 30.5804% ( 182) 00:09:26.886 11264.822 - 11317.462: 32.5574% ( 186) 00:09:26.886 11317.462 - 11370.101: 34.2262% ( 157) 00:09:26.886 11370.101 - 11422.741: 35.7249% ( 141) 00:09:26.886 11422.741 - 11475.380: 37.0429% ( 124) 00:09:26.886 11475.380 - 11528.019: 38.3716% ( 125) 00:09:26.886 11528.019 - 11580.659: 39.4877% ( 105) 00:09:26.886 11580.659 - 11633.298: 40.2317% ( 70) 00:09:26.886 11633.298 - 11685.937: 40.9545% ( 68) 00:09:26.886 11685.937 - 11738.577: 41.7411% ( 74) 00:09:26.886 11738.577 - 11791.216: 42.2194% ( 45) 00:09:26.886 11791.216 - 11843.855: 42.7615% ( 51) 00:09:26.886 11843.855 - 11896.495: 43.1760% ( 39) 00:09:26.886 11896.495 - 11949.134: 43.5268% ( 33) 00:09:26.886 11949.134 - 12001.773: 43.9094% ( 36) 00:09:26.886 12001.773 - 12054.413: 44.3771% ( 44) 00:09:26.886 12054.413 - 12107.052: 44.9298% ( 52) 00:09:26.886 12107.052 - 12159.692: 45.4826% ( 52) 00:09:26.886 12159.692 - 12212.331: 46.2372% ( 71) 00:09:26.886 12212.331 - 12264.970: 46.8219% ( 55) 00:09:26.886 12264.970 - 12317.610: 47.3108% ( 46) 00:09:26.886 12317.610 - 12370.249: 47.7891% ( 45) 00:09:26.886 12370.249 - 12422.888: 48.4588% ( 63) 00:09:26.886 12422.888 - 12475.528: 49.2878% ( 78) 00:09:26.886 12475.528 - 12528.167: 49.9362% ( 61) 00:09:26.886 12528.167 - 12580.806: 50.6590% ( 68) 00:09:26.886 12580.806 - 12633.446: 51.3287% ( 63) 00:09:26.886 12633.446 - 12686.085: 52.1365% ( 76) 00:09:26.886 12686.085 - 12738.724: 52.8061% ( 63) 00:09:26.886 12738.724 - 12791.364: 53.3907% ( 55) 00:09:26.886 12791.364 - 12844.003: 54.0072% ( 58) 00:09:26.886 
12844.003 - 12896.643: 54.4218% ( 39) 00:09:26.886 12896.643 - 12949.282: 54.9107% ( 46) 00:09:26.886 12949.282 - 13001.921: 55.3678% ( 43) 00:09:26.886 13001.921 - 13054.561: 55.8567% ( 46) 00:09:26.886 13054.561 - 13107.200: 56.5795% ( 68) 00:09:26.886 13107.200 - 13159.839: 56.9515% ( 35) 00:09:26.886 13159.839 - 13212.479: 57.3873% ( 41) 00:09:26.886 13212.479 - 13265.118: 57.6743% ( 27) 00:09:26.886 13265.118 - 13317.757: 58.0570% ( 36) 00:09:26.886 13317.757 - 13370.397: 58.3971% ( 32) 00:09:26.886 13370.397 - 13423.036: 58.7372% ( 32) 00:09:26.886 13423.036 - 13475.676: 58.9817% ( 23) 00:09:26.886 13475.676 - 13580.954: 59.5132% ( 50) 00:09:26.886 13580.954 - 13686.233: 60.0021% ( 46) 00:09:26.886 13686.233 - 13791.512: 60.5548% ( 52) 00:09:26.886 13791.512 - 13896.790: 61.2457% ( 65) 00:09:26.886 13896.790 - 14002.069: 62.2768% ( 97) 00:09:26.886 14002.069 - 14107.348: 63.0102% ( 69) 00:09:26.886 14107.348 - 14212.627: 63.9137% ( 85) 00:09:26.886 14212.627 - 14317.905: 65.1148% ( 113) 00:09:26.886 14317.905 - 14423.184: 65.8588% ( 70) 00:09:26.886 14423.184 - 14528.463: 66.7942% ( 88) 00:09:26.886 14528.463 - 14633.741: 67.5276% ( 69) 00:09:26.886 14633.741 - 14739.020: 68.2929% ( 72) 00:09:26.887 14739.020 - 14844.299: 69.1220% ( 78) 00:09:26.887 14844.299 - 14949.578: 70.2912% ( 110) 00:09:26.887 14949.578 - 15054.856: 71.2691% ( 92) 00:09:26.887 15054.856 - 15160.135: 71.9813% ( 67) 00:09:26.887 15160.135 - 15265.414: 72.9486% ( 91) 00:09:26.887 15265.414 - 15370.692: 74.0965% ( 108) 00:09:26.887 15370.692 - 15475.971: 75.3508% ( 118) 00:09:26.887 15475.971 - 15581.250: 76.3499% ( 94) 00:09:26.887 15581.250 - 15686.529: 77.2853% ( 88) 00:09:26.887 15686.529 - 15791.807: 77.9337% ( 61) 00:09:26.887 15791.807 - 15897.086: 78.5183% ( 55) 00:09:26.887 15897.086 - 16002.365: 79.1241% ( 57) 00:09:26.887 16002.365 - 16107.643: 79.6450% ( 49) 00:09:26.887 16107.643 - 16212.922: 80.0383% ( 37) 00:09:26.887 16212.922 - 16318.201: 80.3784% ( 32) 00:09:26.887 16318.201 - 16423.480: 80.8886% ( 48) 00:09:26.887 16423.480 - 16528.758: 81.6114% ( 68) 00:09:26.887 16528.758 - 16634.037: 82.3023% ( 65) 00:09:26.887 16634.037 - 16739.316: 82.9401% ( 60) 00:09:26.887 16739.316 - 16844.594: 83.9392% ( 94) 00:09:26.887 16844.594 - 16949.873: 84.4281% ( 46) 00:09:26.887 16949.873 - 17055.152: 84.9171% ( 46) 00:09:26.887 17055.152 - 17160.431: 85.3635% ( 42) 00:09:26.887 17160.431 - 17265.709: 85.8737% ( 48) 00:09:26.887 17265.709 - 17370.988: 86.3839% ( 48) 00:09:26.887 17370.988 - 17476.267: 86.9473% ( 53) 00:09:26.887 17476.267 - 17581.545: 87.4575% ( 48) 00:09:26.887 17581.545 - 17686.824: 87.9889% ( 50) 00:09:26.887 17686.824 - 17792.103: 88.6267% ( 60) 00:09:26.887 17792.103 - 17897.382: 89.2538% ( 59) 00:09:26.887 17897.382 - 18002.660: 90.0404% ( 74) 00:09:26.887 18002.660 - 18107.939: 90.9545% ( 86) 00:09:26.887 18107.939 - 18213.218: 91.9005% ( 89) 00:09:26.887 18213.218 - 18318.496: 92.7083% ( 76) 00:09:26.887 18318.496 - 18423.775: 93.6862% ( 92) 00:09:26.887 18423.775 - 18529.054: 94.4728% ( 74) 00:09:26.887 18529.054 - 18634.333: 95.0043% ( 50) 00:09:26.887 18634.333 - 18739.611: 95.5251% ( 49) 00:09:26.887 18739.611 - 18844.890: 96.0778% ( 52) 00:09:26.887 18844.890 - 18950.169: 96.4711% ( 37) 00:09:26.887 18950.169 - 19055.447: 96.7368% ( 25) 00:09:26.887 19055.447 - 19160.726: 96.9388% ( 19) 00:09:26.887 19160.726 - 19266.005: 97.0770% ( 13) 00:09:26.887 19266.005 - 19371.284: 97.1514% ( 7) 00:09:26.887 19371.284 - 19476.562: 97.2045% ( 5) 00:09:26.887 19476.562 - 19581.841: 97.2470% ( 
4) 00:09:26.887 19581.841 - 19687.120: 97.2789% ( 3) 00:09:26.887 20002.956 - 20108.235: 97.3108% ( 3) 00:09:26.887 20108.235 - 20213.513: 97.3958% ( 8) 00:09:26.887 20213.513 - 20318.792: 97.4702% ( 7) 00:09:26.887 20318.792 - 20424.071: 97.6935% ( 21) 00:09:26.887 20424.071 - 20529.349: 98.0442% ( 33) 00:09:26.887 20529.349 - 20634.628: 98.3525% ( 29) 00:09:26.887 20634.628 - 20739.907: 98.5119% ( 15) 00:09:26.887 20739.907 - 20845.186: 98.5863% ( 7) 00:09:26.887 20845.186 - 20950.464: 98.6395% ( 5) 00:09:26.887 28635.810 - 28846.368: 98.6926% ( 5) 00:09:26.887 28846.368 - 29056.925: 98.7457% ( 5) 00:09:26.887 29056.925 - 29267.483: 98.8095% ( 6) 00:09:26.887 29267.483 - 29478.040: 98.8627% ( 5) 00:09:26.887 29478.040 - 29688.598: 98.9158% ( 5) 00:09:26.887 29688.598 - 29899.155: 98.9690% ( 5) 00:09:26.887 29899.155 - 30109.712: 99.0221% ( 5) 00:09:26.887 30109.712 - 30320.270: 99.0753% ( 5) 00:09:26.887 30320.270 - 30530.827: 99.1284% ( 5) 00:09:26.887 30530.827 - 30741.385: 99.1815% ( 5) 00:09:26.887 30741.385 - 30951.942: 99.2347% ( 5) 00:09:26.887 30951.942 - 31162.500: 99.2878% ( 5) 00:09:26.887 31162.500 - 31373.057: 99.3197% ( 3) 00:09:26.887 38321.452 - 38532.010: 99.3622% ( 4) 00:09:26.887 38532.010 - 38742.567: 99.4260% ( 6) 00:09:26.887 38742.567 - 38953.124: 99.4898% ( 6) 00:09:26.887 38953.124 - 39163.682: 99.5429% ( 5) 00:09:26.887 39163.682 - 39374.239: 99.6067% ( 6) 00:09:26.887 39374.239 - 39584.797: 99.6599% ( 5) 00:09:26.887 39584.797 - 39795.354: 99.7236% ( 6) 00:09:26.887 39795.354 - 40005.912: 99.7874% ( 6) 00:09:26.887 40005.912 - 40216.469: 99.8512% ( 6) 00:09:26.887 40216.469 - 40427.027: 99.9150% ( 6) 00:09:26.887 40427.027 - 40637.584: 99.9681% ( 5) 00:09:26.887 40637.584 - 40848.141: 100.0000% ( 3) 00:09:26.887 00:09:26.887 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:26.887 ============================================================================== 00:09:26.887 Range in us Cumulative IO count 00:09:26.887 8843.412 - 8896.051: 0.0633% ( 6) 00:09:26.887 8896.051 - 8948.691: 0.1056% ( 4) 00:09:26.887 8948.691 - 9001.330: 0.2111% ( 10) 00:09:26.887 9001.330 - 9053.969: 0.3695% ( 15) 00:09:26.887 9053.969 - 9106.609: 0.6018% ( 22) 00:09:26.887 9106.609 - 9159.248: 0.8657% ( 25) 00:09:26.887 9159.248 - 9211.888: 1.2880% ( 40) 00:09:26.887 9211.888 - 9264.527: 1.5097% ( 21) 00:09:26.887 9264.527 - 9317.166: 1.6364% ( 12) 00:09:26.887 9317.166 - 9369.806: 1.8476% ( 20) 00:09:26.887 9369.806 - 9422.445: 2.2065% ( 34) 00:09:26.887 9422.445 - 9475.084: 2.5549% ( 33) 00:09:26.887 9475.084 - 9527.724: 2.9455% ( 37) 00:09:26.887 9527.724 - 9580.363: 3.3150% ( 35) 00:09:26.887 9580.363 - 9633.002: 3.9590% ( 61) 00:09:26.887 9633.002 - 9685.642: 4.6769% ( 68) 00:09:26.887 9685.642 - 9738.281: 4.9620% ( 27) 00:09:26.887 9738.281 - 9790.920: 5.2154% ( 24) 00:09:26.887 9790.920 - 9843.560: 5.4582% ( 23) 00:09:26.887 9843.560 - 9896.199: 5.7960% ( 32) 00:09:26.887 9896.199 - 9948.839: 6.0705% ( 26) 00:09:26.887 9948.839 - 10001.478: 6.3556% ( 27) 00:09:26.887 10001.478 - 10054.117: 6.7462% ( 37) 00:09:26.887 10054.117 - 10106.757: 7.0840% ( 32) 00:09:26.887 10106.757 - 10159.396: 7.4113% ( 31) 00:09:26.887 10159.396 - 10212.035: 7.6964% ( 27) 00:09:26.887 10212.035 - 10264.675: 8.0764% ( 36) 00:09:26.887 10264.675 - 10317.314: 8.7310% ( 62) 00:09:26.887 10317.314 - 10369.953: 9.1955% ( 44) 00:09:26.887 10369.953 - 10422.593: 9.6284% ( 41) 00:09:26.887 10422.593 - 10475.232: 10.1457% ( 49) 00:09:26.887 10475.232 - 10527.871: 10.7158% ( 54) 00:09:26.887 
10527.871 - 10580.511: 11.2226% ( 48) 00:09:26.887 10580.511 - 10633.150: 11.8349% ( 58) 00:09:26.887 10633.150 - 10685.790: 12.5739% ( 70) 00:09:26.887 10685.790 - 10738.429: 13.4185% ( 80) 00:09:26.887 10738.429 - 10791.068: 14.4848% ( 101) 00:09:26.887 10791.068 - 10843.708: 15.9840% ( 142) 00:09:26.887 10843.708 - 10896.347: 17.4620% ( 140) 00:09:26.887 10896.347 - 10948.986: 18.8028% ( 127) 00:09:26.887 10948.986 - 11001.626: 20.4497% ( 156) 00:09:26.887 11001.626 - 11054.265: 22.1178% ( 158) 00:09:26.887 11054.265 - 11106.904: 23.9759% ( 176) 00:09:26.887 11106.904 - 11159.544: 25.6651% ( 160) 00:09:26.887 11159.544 - 11212.183: 27.7133% ( 194) 00:09:26.887 11212.183 - 11264.822: 29.6558% ( 184) 00:09:26.887 11264.822 - 11317.462: 31.5245% ( 177) 00:09:26.887 11317.462 - 11370.101: 33.3404% ( 172) 00:09:26.887 11370.101 - 11422.741: 34.8923% ( 147) 00:09:26.887 11422.741 - 11475.380: 36.0220% ( 107) 00:09:26.887 11475.380 - 11528.019: 37.0355% ( 96) 00:09:26.887 11528.019 - 11580.659: 38.0490% ( 96) 00:09:26.887 11580.659 - 11633.298: 39.1153% ( 101) 00:09:26.887 11633.298 - 11685.937: 40.1710% ( 100) 00:09:26.887 11685.937 - 11738.577: 40.9101% ( 70) 00:09:26.887 11738.577 - 11791.216: 41.4696% ( 53) 00:09:26.887 11791.216 - 11843.855: 42.1875% ( 68) 00:09:26.887 11843.855 - 11896.495: 42.7365% ( 52) 00:09:26.887 11896.495 - 11949.134: 43.4755% ( 70) 00:09:26.887 11949.134 - 12001.773: 44.0562% ( 55) 00:09:26.887 12001.773 - 12054.413: 44.6157% ( 53) 00:09:26.887 12054.413 - 12107.052: 45.0591% ( 42) 00:09:26.887 12107.052 - 12159.692: 45.7348% ( 64) 00:09:26.887 12159.692 - 12212.331: 46.2627% ( 50) 00:09:26.887 12212.331 - 12264.970: 46.7905% ( 50) 00:09:26.887 12264.970 - 12317.610: 47.4134% ( 59) 00:09:26.887 12317.610 - 12370.249: 48.2264% ( 77) 00:09:26.887 12370.249 - 12422.888: 48.9865% ( 72) 00:09:26.887 12422.888 - 12475.528: 49.6833% ( 66) 00:09:26.887 12475.528 - 12528.167: 50.4329% ( 71) 00:09:26.887 12528.167 - 12580.806: 51.2352% ( 76) 00:09:26.887 12580.806 - 12633.446: 51.8476% ( 58) 00:09:26.887 12633.446 - 12686.085: 52.8083% ( 91) 00:09:26.887 12686.085 - 12738.724: 53.5895% ( 74) 00:09:26.887 12738.724 - 12791.364: 54.3391% ( 71) 00:09:26.887 12791.364 - 12844.003: 55.0465% ( 67) 00:09:26.887 12844.003 - 12896.643: 55.5110% ( 44) 00:09:26.887 12896.643 - 12949.282: 56.0811% ( 54) 00:09:26.887 12949.282 - 13001.921: 56.3872% ( 29) 00:09:26.887 13001.921 - 13054.561: 56.7462% ( 34) 00:09:26.887 13054.561 - 13107.200: 57.0524% ( 29) 00:09:26.887 13107.200 - 13159.839: 57.3163% ( 25) 00:09:26.887 13159.839 - 13212.479: 57.5169% ( 19) 00:09:26.887 13212.479 - 13265.118: 57.8547% ( 32) 00:09:26.887 13265.118 - 13317.757: 58.1081% ( 24) 00:09:26.887 13317.757 - 13370.397: 58.3720% ( 25) 00:09:26.887 13370.397 - 13423.036: 58.6465% ( 26) 00:09:26.887 13423.036 - 13475.676: 58.8366% ( 18) 00:09:26.887 13475.676 - 13580.954: 59.3644% ( 50) 00:09:26.887 13580.954 - 13686.233: 59.9662% ( 57) 00:09:26.887 13686.233 - 13791.512: 61.4970% ( 145) 00:09:26.887 13791.512 - 13896.790: 62.2994% ( 76) 00:09:26.887 13896.790 - 14002.069: 63.0807% ( 74) 00:09:26.887 14002.069 - 14107.348: 64.1258% ( 99) 00:09:26.887 14107.348 - 14212.627: 64.8121% ( 65) 00:09:26.887 14212.627 - 14317.905: 65.5722% ( 72) 00:09:26.887 14317.905 - 14423.184: 66.2162% ( 61) 00:09:26.887 14423.184 - 14528.463: 66.7863% ( 54) 00:09:26.887 14528.463 - 14633.741: 67.4726% ( 65) 00:09:26.887 14633.741 - 14739.020: 68.3699% ( 85) 00:09:26.887 14739.020 - 14844.299: 69.0245% ( 62) 00:09:26.887 14844.299 - 
14949.578: 69.8691% ( 80) 00:09:26.887 14949.578 - 15054.856: 70.8404% ( 92) 00:09:26.887 15054.856 - 15160.135: 71.5899% ( 71) 00:09:26.887 15160.135 - 15265.414: 72.1389% ( 52) 00:09:26.887 15265.414 - 15370.692: 72.7513% ( 58) 00:09:26.887 15370.692 - 15475.971: 73.2264% ( 45) 00:09:26.887 15475.971 - 15581.250: 73.8492% ( 59) 00:09:26.887 15581.250 - 15686.529: 74.7677% ( 87) 00:09:26.887 15686.529 - 15791.807: 76.0557% ( 122) 00:09:26.887 15791.807 - 15897.086: 76.9426% ( 84) 00:09:26.887 15897.086 - 16002.365: 78.1144% ( 111) 00:09:26.887 16002.365 - 16107.643: 78.8957% ( 74) 00:09:26.887 16107.643 - 16212.922: 79.7192% ( 78) 00:09:26.887 16212.922 - 16318.201: 80.4371% ( 68) 00:09:26.887 16318.201 - 16423.480: 80.9649% ( 50) 00:09:26.888 16423.480 - 16528.758: 81.4928% ( 50) 00:09:26.888 16528.758 - 16634.037: 81.8729% ( 36) 00:09:26.888 16634.037 - 16739.316: 82.5063% ( 60) 00:09:26.888 16739.316 - 16844.594: 83.1292% ( 59) 00:09:26.888 16844.594 - 16949.873: 83.8155% ( 65) 00:09:26.888 16949.873 - 17055.152: 84.3961% ( 55) 00:09:26.888 17055.152 - 17160.431: 85.2935% ( 85) 00:09:26.888 17160.431 - 17265.709: 86.1592% ( 82) 00:09:26.888 17265.709 - 17370.988: 86.9932% ( 79) 00:09:26.888 17370.988 - 17476.267: 87.7639% ( 73) 00:09:26.888 17476.267 - 17581.545: 88.6402% ( 83) 00:09:26.888 17581.545 - 17686.824: 89.2842% ( 61) 00:09:26.888 17686.824 - 17792.103: 89.8860% ( 57) 00:09:26.888 17792.103 - 17897.382: 90.4878% ( 57) 00:09:26.888 17897.382 - 18002.660: 91.0473% ( 53) 00:09:26.888 18002.660 - 18107.939: 91.7969% ( 71) 00:09:26.888 18107.939 - 18213.218: 92.3881% ( 56) 00:09:26.888 18213.218 - 18318.496: 92.9160% ( 50) 00:09:26.888 18318.496 - 18423.775: 93.4016% ( 46) 00:09:26.888 18423.775 - 18529.054: 94.0351% ( 60) 00:09:26.888 18529.054 - 18634.333: 94.7741% ( 70) 00:09:26.888 18634.333 - 18739.611: 95.3653% ( 56) 00:09:26.888 18739.611 - 18844.890: 96.0726% ( 67) 00:09:26.888 18844.890 - 18950.169: 96.6955% ( 59) 00:09:26.888 18950.169 - 19055.447: 97.0545% ( 34) 00:09:26.888 19055.447 - 19160.726: 97.1812% ( 12) 00:09:26.888 19160.726 - 19266.005: 97.2656% ( 8) 00:09:26.888 19266.005 - 19371.284: 97.2973% ( 3) 00:09:26.888 19897.677 - 20002.956: 97.3290% ( 3) 00:09:26.888 20002.956 - 20108.235: 97.3923% ( 6) 00:09:26.888 20108.235 - 20213.513: 97.4768% ( 8) 00:09:26.888 20213.513 - 20318.792: 97.5507% ( 7) 00:09:26.888 20318.792 - 20424.071: 97.6351% ( 8) 00:09:26.888 20424.071 - 20529.349: 97.7302% ( 9) 00:09:26.888 20529.349 - 20634.628: 97.8146% ( 8) 00:09:26.888 20634.628 - 20739.907: 97.8991% ( 8) 00:09:26.888 20739.907 - 20845.186: 98.0469% ( 14) 00:09:26.888 20845.186 - 20950.464: 98.2052% ( 15) 00:09:26.888 20950.464 - 21055.743: 98.4058% ( 19) 00:09:26.888 21055.743 - 21161.022: 98.7226% ( 30) 00:09:26.888 21161.022 - 21266.300: 98.8704% ( 14) 00:09:26.888 21266.300 - 21371.579: 99.0182% ( 14) 00:09:26.888 21371.579 - 21476.858: 99.1448% ( 12) 00:09:26.888 21476.858 - 21582.137: 99.2293% ( 8) 00:09:26.888 21582.137 - 21687.415: 99.3243% ( 9) 00:09:26.888 28635.810 - 28846.368: 99.3666% ( 4) 00:09:26.888 28846.368 - 29056.925: 99.4193% ( 5) 00:09:26.888 29056.925 - 29267.483: 99.4827% ( 6) 00:09:26.888 29267.483 - 29478.040: 99.5460% ( 6) 00:09:26.888 29478.040 - 29688.598: 99.6094% ( 6) 00:09:26.888 29688.598 - 29899.155: 99.6622% ( 5) 00:09:26.888 29899.155 - 30109.712: 99.7149% ( 5) 00:09:26.888 30109.712 - 30320.270: 99.7677% ( 5) 00:09:26.888 30320.270 - 30530.827: 99.8311% ( 6) 00:09:26.888 30530.827 - 30741.385: 99.8944% ( 6) 00:09:26.888 30741.385 - 
30951.942: 99.9472% ( 5) 00:09:26.888 30951.942 - 31162.500: 100.0000% ( 5) 00:09:26.888 00:09:26.888 04:33:16 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:26.888 00:09:26.888 real 0m2.703s 00:09:26.888 user 0m2.228s 00:09:26.888 sys 0m0.346s 00:09:26.888 04:33:16 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.888 04:33:16 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:26.888 ************************************ 00:09:26.888 END TEST nvme_perf 00:09:26.888 ************************************ 00:09:26.888 04:33:16 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:26.888 04:33:16 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.888 04:33:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.888 04:33:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.888 ************************************ 00:09:26.888 START TEST nvme_hello_world 00:09:26.888 ************************************ 00:09:26.888 04:33:16 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:27.147 Initializing NVMe Controllers 00:09:27.147 Attached to 0000:00:10.0 00:09:27.147 Namespace ID: 1 size: 6GB 00:09:27.147 Attached to 0000:00:11.0 00:09:27.147 Namespace ID: 1 size: 5GB 00:09:27.147 Attached to 0000:00:13.0 00:09:27.147 Namespace ID: 1 size: 1GB 00:09:27.147 Attached to 0000:00:12.0 00:09:27.147 Namespace ID: 1 size: 4GB 00:09:27.147 Namespace ID: 2 size: 4GB 00:09:27.147 Namespace ID: 3 size: 4GB 00:09:27.147 Initialization complete. 00:09:27.147 INFO: using host memory buffer for IO 00:09:27.147 Hello world! 00:09:27.147 INFO: using host memory buffer for IO 00:09:27.147 Hello world! 00:09:27.147 INFO: using host memory buffer for IO 00:09:27.147 Hello world! 00:09:27.147 INFO: using host memory buffer for IO 00:09:27.147 Hello world! 00:09:27.147 INFO: using host memory buffer for IO 00:09:27.147 Hello world! 00:09:27.147 INFO: using host memory buffer for IO 00:09:27.147 Hello world! 
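For readers following along: the hello_world example above attaches to each controller, allocates a DMA-able buffer, writes "Hello world!" to one namespace LBA, reads it back, and prints it, once per namespace. A minimal sketch of that flow against the public SPDK NVMe API (the controller/namespace are assumed already attached via spdk_nvme_probe(); g_done, hello_io, and the fixed LBA 0 are illustrative, not the example's actual internals):

/* sketch: write one sector, poll for completion */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_done;

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "I/O failed\n");
	}
	g_done = true;
}

static void
hello_io(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
	struct spdk_nvme_qpair *qpair;
	uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
	char *buf;

	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	/* payload buffers must come from pinned, DMA-able memory */
	buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	snprintf(buf, sz, "%s", "Hello world!");

	g_done = false;
	spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
			       io_complete, NULL, 0);
	while (!g_done) {
		/* SPDK is polled-mode: completions arrive only when asked for */
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	/* the real example then reads the LBA back the same way and prints it */

	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
}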
00:09:27.147
00:09:27.147 real 0m0.289s
00:09:27.147 user 0m0.105s
00:09:27.147 sys  0m0.133s
00:09:27.147 04:33:16 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:27.147 04:33:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:27.147 ************************************
00:09:27.147 END TEST nvme_hello_world
00:09:27.147 ************************************
00:09:27.147 04:33:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:27.147 04:33:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:27.147 04:33:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:27.147 04:33:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:27.147 ************************************
00:09:27.147 START TEST nvme_sgl
00:09:27.147 ************************************
00:09:27.147 04:33:16 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:27.406 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:27.406 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:27.406 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:27.406 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:27.406 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:27.406 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:27.406 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:27.406 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:27.406 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:27.406 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:27.406 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:27.406 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:27.406 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:27.406 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:27.406 NVMe Readv/Writev Request test
00:09:27.406 Attached to 0000:00:10.0
00:09:27.406 Attached to 0000:00:11.0
00:09:27.406 Attached to 0000:00:13.0
00:09:27.406 Attached to 0000:00:12.0
00:09:27.406 0000:00:10.0: build_io_request_2 test passed
00:09:27.406 0000:00:10.0: build_io_request_4 test passed
00:09:27.406 0000:00:10.0: build_io_request_5 test passed
00:09:27.406 0000:00:10.0: build_io_request_6 test passed
00:09:27.406 0000:00:10.0: build_io_request_7 test passed
00:09:27.406 0000:00:10.0: build_io_request_10 test passed
00:09:27.406 0000:00:11.0: build_io_request_2 test passed
00:09:27.406 0000:00:11.0: build_io_request_4 test passed
00:09:27.406 0000:00:11.0: build_io_request_5 test passed
00:09:27.406 0000:00:11.0: build_io_request_6 test passed
00:09:27.406 0000:00:11.0: build_io_request_7 test passed
00:09:27.406 0000:00:11.0: build_io_request_10 test passed
00:09:27.406 Cleaning up...
00:09:27.406
00:09:27.406 real 0m0.341s
00:09:27.406 user 0m0.167s
00:09:27.406 sys  0m0.132s
00:09:27.406 04:33:16 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:27.406 04:33:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:27.406 ************************************
00:09:27.406 END TEST nvme_sgl
00:09:27.406 ************************************
00:09:27.406 04:33:16 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:27.406 04:33:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:27.406 04:33:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:27.406 04:33:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:27.406 ************************************
00:09:27.406 START TEST nvme_e2edp
00:09:27.406 ************************************
00:09:27.406 04:33:16 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:27.973 NVMe Write/Read with End-to-End data protection test
00:09:27.973 Attached to 0000:00:10.0
00:09:27.973 Attached to 0000:00:11.0
00:09:27.973 Attached to 0000:00:13.0
00:09:27.973 Attached to 0000:00:12.0
00:09:27.973 Cleaning up...
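A note on the sgl output above: the "Invalid IO length parameter" lines appear to be the test's deliberate negative cases — vectored requests whose scatter-gather element lengths do not sum to a whole number of sectors get rejected at submit time, while the complementary build_io_request_N cases are reported as passed. A rough sketch of the vectored path being exercised, with the g_sge bookkeeping as an illustrative stand-in for the test's own buffers:

/* sketch: vectored write via caller-supplied SGE iteration callbacks */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

struct sge { void *addr; uint32_t len; };
static struct sge g_sge[4];
static uint32_t g_idx;

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	g_idx = 0; /* restart SGE iteration; offset handling elided */
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	*address = g_sge[g_idx].addr;
	*length = g_sge[g_idx].len;
	g_idx++;
	return 0;
}

/* if the SGE lengths don't sum to lba_count * sector_size, the
 * submit fails up front instead of reaching the device */
static int
submit_vectored_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
		      uint64_t lba, uint32_t lba_count, spdk_nvme_cmd_cb cb)
{
	return spdk_nvme_ns_cmd_writev(ns, qp, lba, lba_count, cb, NULL, 0,
				       reset_sgl, next_sge);
}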
00:09:27.973
00:09:27.973 real 0m0.326s
00:09:27.973 user 0m0.108s
00:09:27.973 sys  0m0.170s
00:09:27.973 04:33:17 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:27.973 04:33:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:27.973 ************************************
00:09:27.973 END TEST nvme_e2edp
00:09:27.973 ************************************
00:09:27.973 04:33:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:27.973 04:33:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:27.973 04:33:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:27.973 04:33:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:27.973 ************************************
00:09:27.973 START TEST nvme_reserve
00:09:27.973 ************************************
00:09:27.973 04:33:17 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:28.248 =====================================================
00:09:28.248 NVMe Controller at PCI bus 0, device 16, function 0
00:09:28.248 =====================================================
00:09:28.248 Reservations: Not Supported
00:09:28.248 =====================================================
00:09:28.248 NVMe Controller at PCI bus 0, device 17, function 0
00:09:28.248 =====================================================
00:09:28.248 Reservations: Not Supported
00:09:28.248 =====================================================
00:09:28.248 NVMe Controller at PCI bus 0, device 19, function 0
00:09:28.248 =====================================================
00:09:28.248 Reservations: Not Supported
00:09:28.248 =====================================================
00:09:28.248 NVMe Controller at PCI bus 0, device 18, function 0
00:09:28.248 =====================================================
00:09:28.248 Reservations: Not Supported
00:09:28.248 Reservation test passed
00:09:28.248
00:09:28.248 real 0m0.292s
00:09:28.248 user 0m0.100s
00:09:28.248 sys  0m0.139s
00:09:28.248 04:33:17 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:28.248 04:33:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:09:28.248 ************************************
00:09:28.248 END TEST nvme_reserve
00:09:28.248 ************************************
00:09:28.248 04:33:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:28.248 04:33:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:28.248 04:33:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:28.248 04:33:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:28.248 ************************************
00:09:28.248 START TEST nvme_err_injection
00:09:28.248 ************************************
00:09:28.248 04:33:17 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:28.511 NVMe Error Injection test
00:09:28.511 Attached to 0000:00:10.0
00:09:28.511 Attached to 0000:00:11.0
00:09:28.511 Attached to 0000:00:13.0
00:09:28.511 Attached to 0000:00:12.0
00:09:28.511 0000:00:13.0: get features failed as expected
00:09:28.511 0000:00:12.0: get features failed as expected
00:09:28.511 0000:00:10.0: get features failed as expected
00:09:28.511 0000:00:11.0: get features failed as expected
00:09:28.511 0000:00:10.0: get features successfully as expected
00:09:28.511 0000:00:11.0: get features successfully as expected
00:09:28.511 0000:00:13.0: get features successfully as expected
00:09:28.511 0000:00:12.0: get features successfully as expected
00:09:28.511 0000:00:10.0: read failed as expected
00:09:28.511 0000:00:11.0: read failed as expected
00:09:28.511 0000:00:13.0: read failed as expected
00:09:28.511 0000:00:12.0: read failed as expected
00:09:28.511 0000:00:11.0: read successfully as expected
00:09:28.511 0000:00:13.0: read successfully as expected
00:09:28.511 0000:00:12.0: read successfully as expected
00:09:28.511 0000:00:10.0: read successfully as expected
00:09:28.511 Cleaning up...
00:09:28.511
00:09:28.511 real 0m0.305s
00:09:28.511 user 0m0.106s
00:09:28.511 sys  0m0.152s
00:09:28.511 04:33:17 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:28.511 04:33:17 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:09:28.511 ************************************
00:09:28.511 END TEST nvme_err_injection
00:09:28.511 ************************************
00:09:28.769 04:33:18 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:28.770 04:33:18 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:09:28.770 04:33:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:28.770 04:33:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:28.770 ************************************
00:09:28.770 START TEST nvme_overhead
00:09:28.770 ************************************
00:09:28.770 04:33:18 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:30.147 Initializing NVMe Controllers
00:09:30.147 Attached to 0000:00:10.0
00:09:30.147 Attached to 0000:00:11.0
00:09:30.147 Attached to 0000:00:13.0
00:09:30.147 Attached to 0000:00:12.0
00:09:30.147 Initialization complete. Launching workers.
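While the overhead workers run, each 4 KiB I/O ("-o 4096") is timed individually for one second ("-t 1"), and the submit/complete statistics below come from stamping the submission and the completion callback. A simplified sketch of how such per-I/O timing can be taken with SPDK's tick helpers (the io_sample struct and the nanosecond conversion are illustrative, not the tool's actual internals):

/* sketch: stamp the submit, diff at completion, convert ticks to ns */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

struct io_sample { uint64_t submit_tsc; };

static uint64_t g_tsc_hz;

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_sample *s = arg;
	uint64_t ticks = spdk_get_ticks() - s->submit_tsc;
	uint64_t ns = ticks * 1000000000ULL / g_tsc_hz;
	(void)ns; /* fold into avg/min/max and the histograms printed below */
}

static void
timed_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
	   void *buf, struct io_sample *s)
{
	g_tsc_hz = spdk_get_ticks_hz();
	s->submit_tsc = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, s, 0);
}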
00:09:30.147 submit (in ns)   avg, min, max =  13943.9,  11130.9, 109905.2
00:09:30.147 complete (in ns) avg, min, max =   8784.7,   7767.9, 117344.6
00:09:30.147
00:09:30.147 Submit histogram
00:09:30.147 ================
00:09:30.147        Range in us     Cumulative     Count
00:09:30.148 [buckets condensed: cumulative count climbs from 0.0141% at ~11.1 us to 100.0000% at ~110.2 us]
00:09:30.148
00:09:30.148 Complete histogram
00:09:30.148 ==================
00:09:30.148        Range in us     Cumulative     Count
00:09:30.149 [buckets condensed: cumulative count climbs from 0.8187% at ~7.8 us to 100.0000% at ~117.6 us]
00:09:30.149
00:09:30.149 real 0m1.289s
00:09:30.149 user 0m1.091s
00:09:30.149 sys  0m0.152s
00:09:30.149 04:33:19 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:30.149 04:33:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:09:30.149 ************************************
00:09:30.149 END TEST nvme_overhead
00:09:30.149 ************************************
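The next test, nvme_arbitration, starts one submitter thread per core in the 0xf mask and reports per-core IO/s; the "-a 0 -b 0" flags on its command line below likely select the arbitration mechanism and burst. A sketch of the queue-pair priority setup such a weighted-round-robin run relies on (alloc_prio_qpair is an illustrative helper; WRR must be negotiated at controller initialization for qprio to have any effect):

/* sketch: allocate an I/O qpair with an explicit WRR priority class */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio prio)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = prio; /* SPDK_NVME_QPRIO_URGENT, _HIGH, _MEDIUM or _LOW */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}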
00:09:30.149 04:33:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:30.149 04:33:19 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:09:30.149 04:33:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:30.149 04:33:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:30.149 ************************************
00:09:30.149 START TEST nvme_arbitration
00:09:30.149 ************************************
00:09:30.149 04:33:19 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:33.437 Initializing NVMe Controllers
00:09:33.437 Attached to 0000:00:10.0
00:09:33.437 Attached to 0000:00:11.0
00:09:33.437 Attached to 0000:00:13.0
00:09:33.437 Attached to 0000:00:12.0
00:09:33.437 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:33.437 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:33.437 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:33.437 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:33.437 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:33.437 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:33.437 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:33.437 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:33.437 Initialization complete. Launching workers.
00:09:33.437 Starting thread on core 1 with urgent priority queue
00:09:33.437 Starting thread on core 2 with urgent priority queue
00:09:33.437 Starting thread on core 3 with urgent priority queue
00:09:33.437 Starting thread on core 0 with urgent priority queue
00:09:33.437 QEMU NVMe Ctrl (12340 ) core 0:  533.33 IO/s  187.50 secs/100000 ios
00:09:33.437 QEMU NVMe Ctrl (12342 ) core 0:  533.33 IO/s  187.50 secs/100000 ios
00:09:33.437 QEMU NVMe Ctrl (12341 ) core 1:  576.00 IO/s  173.61 secs/100000 ios
00:09:33.437 QEMU NVMe Ctrl (12342 ) core 1:  576.00 IO/s  173.61 secs/100000 ios
00:09:33.437 QEMU NVMe Ctrl (12343 ) core 2:  597.33 IO/s  167.41 secs/100000 ios
00:09:33.437 QEMU NVMe Ctrl (12342 ) core 3:  533.33 IO/s  187.50 secs/100000 ios
00:09:33.437 ========================================================
00:09:33.437
00:09:33.437
00:09:33.437 real 0m3.442s
00:09:33.437 user 0m9.454s
00:09:33.437 sys  0m0.159s
00:09:33.437 04:33:22 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:33.437 04:33:22 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:09:33.437 ************************************
00:09:33.437 END TEST nvme_arbitration
00:09:33.437 ************************************
00:09:33.437 04:33:22 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:33.437 04:33:22 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:33.437 04:33:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:33.437 04:33:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:33.437 ************************************
00:09:33.437 START TEST nvme_single_aen
00:09:33.437 ************************************
00:09:33.437 04:33:22 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:33.696 Asynchronous Event Request test
00:09:33.696 Attached to 0000:00:10.0
00:09:33.696 Attached to 0000:00:11.0
00:09:33.696 Attached to 0000:00:13.0
00:09:33.696 Attached to 0000:00:12.0
00:09:33.696 Reset controller to setup AER completions for this process
00:09:33.696 Registering asynchronous event callbacks...
00:09:33.696 Getting orig temperature thresholds of all controllers 00:09:33.696 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:33.696 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:33.696 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:33.696 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:33.696 Setting all controllers temperature threshold low to trigger AER 00:09:33.696 Waiting for all controllers temperature threshold to be set lower 00:09:33.696 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:33.696 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:33.696 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:33.696 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:33.696 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:33.696 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:33.696 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:33.696 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:33.696 Waiting for all controllers to trigger AER and reset threshold 00:09:33.696 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.696 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.696 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.696 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.696 Cleaning up... 00:09:33.955 00:09:33.955 real 0m0.287s 00:09:33.955 user 0m0.098s 00:09:33.955 sys 0m0.142s 00:09:33.955 04:33:23 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.955 04:33:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:33.955 ************************************ 00:09:33.955 END TEST nvme_single_aen 00:09:33.955 ************************************ 00:09:33.955 04:33:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:33.955 04:33:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.955 04:33:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.955 04:33:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.955 ************************************ 00:09:33.955 START TEST nvme_doorbell_aers 00:09:33.955 ************************************ 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:33.955 04:33:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:34.214 [2024-10-15 04:33:23.691595] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:09:44.252 Executing: test_write_invalid_db 00:09:44.252 Waiting for AER completion... 00:09:44.252 Failure: test_write_invalid_db 00:09:44.252 00:09:44.252 Executing: test_invalid_db_write_overflow_sq 00:09:44.252 Waiting for AER completion... 00:09:44.252 Failure: test_invalid_db_write_overflow_sq 00:09:44.252 00:09:44.252 Executing: test_invalid_db_write_overflow_cq 00:09:44.252 Waiting for AER completion... 00:09:44.252 Failure: test_invalid_db_write_overflow_cq 00:09:44.252 00:09:44.252 04:33:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:44.252 04:33:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:44.252 [2024-10-15 04:33:33.736498] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:09:54.241 Executing: test_write_invalid_db 00:09:54.241 Waiting for AER completion... 00:09:54.241 Failure: test_write_invalid_db 00:09:54.241 00:09:54.241 Executing: test_invalid_db_write_overflow_sq 00:09:54.241 Waiting for AER completion... 00:09:54.241 Failure: test_invalid_db_write_overflow_sq 00:09:54.241 00:09:54.241 Executing: test_invalid_db_write_overflow_cq 00:09:54.241 Waiting for AER completion... 00:09:54.241 Failure: test_invalid_db_write_overflow_cq 00:09:54.241 00:09:54.241 04:33:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:54.241 04:33:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:54.499 [2024-10-15 04:33:43.812901] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:04.475 Executing: test_write_invalid_db 00:10:04.475 Waiting for AER completion... 00:10:04.475 Failure: test_write_invalid_db 00:10:04.475 00:10:04.475 Executing: test_invalid_db_write_overflow_sq 00:10:04.475 Waiting for AER completion... 00:10:04.475 Failure: test_invalid_db_write_overflow_sq 00:10:04.475 00:10:04.475 Executing: test_invalid_db_write_overflow_cq 00:10:04.475 Waiting for AER completion... 
00:10:04.475 Failure: test_invalid_db_write_overflow_cq 00:10:04.475 00:10:04.475 04:33:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:04.475 04:33:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:04.475 [2024-10-15 04:33:53.841772] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 Executing: test_write_invalid_db 00:10:14.451 Waiting for AER completion... 00:10:14.451 Failure: test_write_invalid_db 00:10:14.451 00:10:14.451 Executing: test_invalid_db_write_overflow_sq 00:10:14.451 Waiting for AER completion... 00:10:14.451 Failure: test_invalid_db_write_overflow_sq 00:10:14.451 00:10:14.451 Executing: test_invalid_db_write_overflow_cq 00:10:14.451 Waiting for AER completion... 00:10:14.451 Failure: test_invalid_db_write_overflow_cq 00:10:14.451 00:10:14.451 00:10:14.451 real 0m40.330s 00:10:14.451 user 0m28.524s 00:10:14.451 sys 0m11.438s 00:10:14.451 04:34:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.451 04:34:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:14.451 ************************************ 00:10:14.451 END TEST nvme_doorbell_aers 00:10:14.451 ************************************ 00:10:14.451 04:34:03 nvme -- nvme/nvme.sh@97 -- # uname 00:10:14.451 04:34:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:14.451 04:34:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:14.451 04:34:03 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:14.451 04:34:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.451 04:34:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.451 ************************************ 00:10:14.451 START TEST nvme_multi_aen 00:10:14.451 ************************************ 00:10:14.451 04:34:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:14.451 [2024-10-15 04:34:03.935199] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.935311] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.935328] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.937224] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.937273] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.937289] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.938660] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. 
Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.938826] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.938849] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.940210] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.940246] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.451 [2024-10-15 04:34:03.940260] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64821) is not found. Dropping the request. 00:10:14.710 Child process pid: 65338 00:10:14.970 [Child] Asynchronous Event Request test 00:10:14.970 [Child] Attached to 0000:00:10.0 00:10:14.970 [Child] Attached to 0000:00:11.0 00:10:14.970 [Child] Attached to 0000:00:13.0 00:10:14.970 [Child] Attached to 0000:00:12.0 00:10:14.970 [Child] Registering asynchronous event callbacks... 00:10:14.970 [Child] Getting orig temperature thresholds of all controllers 00:10:14.970 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:14.970 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 [Child] Cleaning up... 00:10:14.970 Asynchronous Event Request test 00:10:14.970 Attached to 0000:00:10.0 00:10:14.970 Attached to 0000:00:11.0 00:10:14.970 Attached to 0000:00:13.0 00:10:14.970 Attached to 0000:00:12.0 00:10:14.970 Reset controller to setup AER completions for this process 00:10:14.970 Registering asynchronous event callbacks... 
00:10:14.970 Getting orig temperature thresholds of all controllers 00:10:14.970 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:14.970 Setting all controllers temperature threshold low to trigger AER 00:10:14.970 Waiting for all controllers temperature threshold to be set lower 00:10:14.970 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:14.970 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:14.970 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:14.970 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:14.970 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:14.970 Waiting for all controllers to trigger AER and reset threshold 00:10:14.970 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.970 Cleaning up... 00:10:14.970 00:10:14.970 real 0m0.625s 00:10:14.970 user 0m0.200s 00:10:14.970 sys 0m0.315s 00:10:14.970 04:34:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.970 04:34:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 ************************************ 00:10:14.970 END TEST nvme_multi_aen 00:10:14.970 ************************************ 00:10:14.970 04:34:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:14.970 04:34:04 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:14.970 04:34:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.970 04:34:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 ************************************ 00:10:14.970 START TEST nvme_startup 00:10:14.970 ************************************ 00:10:14.970 04:34:04 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:15.229 Initializing NVMe Controllers 00:10:15.229 Attached to 0000:00:10.0 00:10:15.229 Attached to 0000:00:11.0 00:10:15.229 Attached to 0000:00:13.0 00:10:15.229 Attached to 0000:00:12.0 00:10:15.229 Initialization complete. 00:10:15.229 Time used:183629.938 (us). 
00:10:15.229 00:10:15.229 real 0m0.283s 00:10:15.229 user 0m0.106s 00:10:15.229 sys 0m0.136s 00:10:15.229 04:34:04 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.229 ************************************ 00:10:15.229 END TEST nvme_startup 00:10:15.229 ************************************ 00:10:15.229 04:34:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:15.229 04:34:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:15.229 04:34:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:15.229 04:34:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.229 04:34:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.488 ************************************ 00:10:15.488 START TEST nvme_multi_secondary 00:10:15.488 ************************************ 00:10:15.488 04:34:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:10:15.488 04:34:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65393 00:10:15.488 04:34:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:15.488 04:34:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65394 00:10:15.488 04:34:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:15.488 04:34:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:18.783 Initializing NVMe Controllers 00:10:18.783 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:18.783 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:18.783 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:18.783 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:18.783 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:18.783 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:18.783 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:18.783 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:18.783 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:18.783 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:18.783 Initialization complete. Launching workers. 
00:10:18.783 ======================================================== 00:10:18.784 Latency(us) 00:10:18.784 Device Information : IOPS MiB/s Average min max 00:10:18.784 PCIE (0000:00:10.0) NSID 1 from core 1: 5009.68 19.57 3191.52 1459.70 9604.17 00:10:18.784 PCIE (0000:00:11.0) NSID 1 from core 1: 5009.68 19.57 3193.48 1553.41 9299.19 00:10:18.784 PCIE (0000:00:13.0) NSID 1 from core 1: 5009.68 19.57 3193.80 1632.19 8076.32 00:10:18.784 PCIE (0000:00:12.0) NSID 1 from core 1: 5009.68 19.57 3194.22 1617.05 8641.90 00:10:18.784 PCIE (0000:00:12.0) NSID 2 from core 1: 5009.68 19.57 3194.37 1595.62 8259.76 00:10:18.784 PCIE (0000:00:12.0) NSID 3 from core 1: 5009.68 19.57 3194.93 1457.18 9093.14 00:10:18.784 ======================================================== 00:10:18.784 Total : 30058.11 117.41 3193.72 1457.18 9604.17 00:10:18.784 00:10:19.042 Initializing NVMe Controllers 00:10:19.042 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:19.042 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:19.042 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:19.042 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:19.042 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:19.042 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:19.042 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:19.042 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:19.042 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:19.042 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:19.042 Initialization complete. Launching workers. 00:10:19.042 ======================================================== 00:10:19.042 Latency(us) 00:10:19.042 Device Information : IOPS MiB/s Average min max 00:10:19.042 PCIE (0000:00:10.0) NSID 1 from core 2: 3006.88 11.75 5319.31 1195.95 14946.53 00:10:19.042 PCIE (0000:00:11.0) NSID 1 from core 2: 3006.88 11.75 5320.60 1174.41 14538.08 00:10:19.042 PCIE (0000:00:13.0) NSID 1 from core 2: 3006.88 11.75 5320.06 1217.15 15345.75 00:10:19.042 PCIE (0000:00:12.0) NSID 1 from core 2: 3006.88 11.75 5320.39 1227.43 15437.81 00:10:19.042 PCIE (0000:00:12.0) NSID 2 from core 2: 3006.88 11.75 5327.37 1238.88 14167.32 00:10:19.042 PCIE (0000:00:12.0) NSID 3 from core 2: 3006.88 11.75 5327.29 1247.86 13846.04 00:10:19.042 ======================================================== 00:10:19.042 Total : 18041.31 70.47 5322.50 1174.41 15437.81 00:10:19.042 00:10:19.042 04:34:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65393 00:10:20.945 Initializing NVMe Controllers 00:10:20.945 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:20.945 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:20.945 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:20.945 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:20.945 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:20.945 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:20.945 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:20.945 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:20.945 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:20.945 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:20.945 Initialization complete. Launching workers. 
00:10:20.945 ======================================================== 00:10:20.945 Latency(us) 00:10:20.945 Device Information : IOPS MiB/s Average min max 00:10:20.945 PCIE (0000:00:10.0) NSID 1 from core 0: 7915.65 30.92 2019.74 925.77 9064.40 00:10:20.945 PCIE (0000:00:11.0) NSID 1 from core 0: 7915.85 30.92 2020.79 944.52 8577.59 00:10:20.945 PCIE (0000:00:13.0) NSID 1 from core 0: 7915.85 30.92 2020.75 886.02 8871.06 00:10:20.945 PCIE (0000:00:12.0) NSID 1 from core 0: 7915.85 30.92 2020.72 844.94 9193.46 00:10:20.945 PCIE (0000:00:12.0) NSID 2 from core 0: 7915.85 30.92 2020.69 805.58 8884.92 00:10:20.945 PCIE (0000:00:12.0) NSID 3 from core 0: 7919.05 30.93 2019.85 762.65 8966.54 00:10:20.945 ======================================================== 00:10:20.945 Total : 47498.08 185.54 2020.42 762.65 9193.46 00:10:20.945 00:10:20.945 04:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65394 00:10:20.945 04:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65464 00:10:20.945 04:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:20.945 04:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:20.945 04:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65465 00:10:20.945 04:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:24.268 Initializing NVMe Controllers 00:10:24.268 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:24.268 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:24.268 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:24.268 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:24.268 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:24.268 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:24.268 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:24.268 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:24.268 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:24.268 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:24.268 Initialization complete. Launching workers. 
00:10:24.268 ======================================================== 00:10:24.268 Latency(us) 00:10:24.268 Device Information : IOPS MiB/s Average min max 00:10:24.268 PCIE (0000:00:10.0) NSID 1 from core 1: 5338.29 20.85 2995.10 972.07 6673.79 00:10:24.268 PCIE (0000:00:11.0) NSID 1 from core 1: 5338.29 20.85 2996.93 1007.43 6740.36 00:10:24.268 PCIE (0000:00:13.0) NSID 1 from core 1: 5338.29 20.85 2997.04 891.87 7760.76 00:10:24.268 PCIE (0000:00:12.0) NSID 1 from core 1: 5338.29 20.85 2997.32 1024.94 7178.31 00:10:24.268 PCIE (0000:00:12.0) NSID 2 from core 1: 5338.29 20.85 2997.62 997.94 7253.10 00:10:24.268 PCIE (0000:00:12.0) NSID 3 from core 1: 5338.29 20.85 2997.68 996.75 6833.87 00:10:24.268 ======================================================== 00:10:24.268 Total : 32029.72 125.12 2996.95 891.87 7760.76 00:10:24.268 00:10:24.268 Initializing NVMe Controllers 00:10:24.268 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:24.268 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:24.268 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:24.268 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:24.268 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:24.268 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:24.268 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:24.268 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:24.268 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:24.268 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:24.268 Initialization complete. Launching workers. 00:10:24.268 ======================================================== 00:10:24.268 Latency(us) 00:10:24.268 Device Information : IOPS MiB/s Average min max 00:10:24.268 PCIE (0000:00:10.0) NSID 1 from core 0: 4890.38 19.10 3269.20 1043.53 6816.80 00:10:24.268 PCIE (0000:00:11.0) NSID 1 from core 0: 4890.38 19.10 3271.07 1050.96 6904.63 00:10:24.268 PCIE (0000:00:13.0) NSID 1 from core 0: 4890.38 19.10 3271.05 1070.81 6615.05 00:10:24.268 PCIE (0000:00:12.0) NSID 1 from core 0: 4890.38 19.10 3271.08 1057.62 6904.89 00:10:24.268 PCIE (0000:00:12.0) NSID 2 from core 0: 4890.38 19.10 3271.17 1068.09 7035.35 00:10:24.268 PCIE (0000:00:12.0) NSID 3 from core 0: 4890.38 19.10 3271.14 1067.15 7077.02 00:10:24.268 ======================================================== 00:10:24.268 Total : 29342.28 114.62 3270.79 1043.53 7077.02 00:10:24.268 00:10:26.170 Initializing NVMe Controllers 00:10:26.170 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:26.170 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:26.170 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:26.170 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:26.170 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:26.170 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:26.170 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:26.170 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:26.170 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:26.170 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:26.170 Initialization complete. Launching workers. 
00:10:26.170 ======================================================== 00:10:26.170 Latency(us) 00:10:26.170 Device Information : IOPS MiB/s Average min max 00:10:26.170 PCIE (0000:00:10.0) NSID 1 from core 2: 3109.56 12.15 5143.63 1206.40 12901.15 00:10:26.170 PCIE (0000:00:11.0) NSID 1 from core 2: 3109.56 12.15 5145.01 1104.36 12474.81 00:10:26.170 PCIE (0000:00:13.0) NSID 1 from core 2: 3109.56 12.15 5144.68 1066.53 12936.77 00:10:26.170 PCIE (0000:00:12.0) NSID 1 from core 2: 3109.56 12.15 5144.15 1100.20 13182.81 00:10:26.170 PCIE (0000:00:12.0) NSID 2 from core 2: 3109.56 12.15 5144.79 1091.30 12851.79 00:10:26.170 PCIE (0000:00:12.0) NSID 3 from core 2: 3109.56 12.15 5144.72 1185.35 12861.38 00:10:26.170 ======================================================== 00:10:26.170 Total : 18657.33 72.88 5144.50 1066.53 13182.81 00:10:26.170 00:10:26.427 ************************************ 00:10:26.427 END TEST nvme_multi_secondary 00:10:26.427 ************************************ 00:10:26.427 04:34:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65464 00:10:26.427 04:34:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65465 00:10:26.427 00:10:26.427 real 0m10.989s 00:10:26.428 user 0m18.511s 00:10:26.428 sys 0m1.048s 00:10:26.428 04:34:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.428 04:34:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:26.428 04:34:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:26.428 04:34:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:26.428 04:34:15 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64402 ]] 00:10:26.428 04:34:15 nvme -- common/autotest_common.sh@1090 -- # kill 64402 00:10:26.428 04:34:15 nvme -- common/autotest_common.sh@1091 -- # wait 64402 00:10:26.428 [2024-10-15 04:34:15.797888] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.798945] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.799000] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.799023] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.801929] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.802135] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.802160] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.802181] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.805155] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 
00:10:26.428 [2024-10-15 04:34:15.805199] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.805218] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.805238] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.808044] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.808089] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.808105] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.428 [2024-10-15 04:34:15.808123] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65331) is not found. Dropping the request. 00:10:26.687 04:34:15 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:10:26.687 04:34:15 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:10:26.687 04:34:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:26.687 04:34:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.687 04:34:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.687 04:34:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:26.687 ************************************ 00:10:26.687 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:26.687 ************************************ 00:10:26.687 04:34:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:26.687 * Looking for test storage... 
00:10:26.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:26.687 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:26.687 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:26.687 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.981 --rc genhtml_branch_coverage=1 00:10:26.981 --rc genhtml_function_coverage=1 00:10:26.981 --rc genhtml_legend=1 00:10:26.981 --rc geninfo_all_blocks=1 00:10:26.981 --rc geninfo_unexecuted_blocks=1 00:10:26.981 00:10:26.981 ' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.981 --rc genhtml_branch_coverage=1 00:10:26.981 --rc genhtml_function_coverage=1 00:10:26.981 --rc genhtml_legend=1 00:10:26.981 --rc geninfo_all_blocks=1 00:10:26.981 --rc geninfo_unexecuted_blocks=1 00:10:26.981 00:10:26.981 ' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.981 --rc genhtml_branch_coverage=1 00:10:26.981 --rc genhtml_function_coverage=1 00:10:26.981 --rc genhtml_legend=1 00:10:26.981 --rc geninfo_all_blocks=1 00:10:26.981 --rc geninfo_unexecuted_blocks=1 00:10:26.981 00:10:26.981 ' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.981 --rc genhtml_branch_coverage=1 00:10:26.981 --rc genhtml_function_coverage=1 00:10:26.981 --rc genhtml_legend=1 00:10:26.981 --rc geninfo_all_blocks=1 00:10:26.981 --rc geninfo_unexecuted_blocks=1 00:10:26.981 00:10:26.981 ' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:26.981 
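The xtrace above walks through the scripts/common.sh version-comparison helpers (lt 1.15 2 resolves to cmp_versions 1.15 '<' 2 and selects the pre-2.0 lcov coverage options). A minimal bash sketch of that helper pair, reconstructed from the traced variable names (ver1, ver2, op, v) rather than copied from the upstream source:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max
        IFS=.-: read -ra ver1 <<< "$1"   # split 1.15 -> (1 15), as in the trace
        IFS=.-: read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions compare equal
    }

With this sketch, lt 1.15 2 returns success because the first differing component (1 vs 2) satisfies '<', which matches the branch the trace takes.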
04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65631 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65631 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 65631 ']' 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
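The @1496-@1510 trace above enumerates the NVMe controllers and picks the first one for the reset test. A condensed sketch of those two helpers, assuming gen_nvme.sh emits JSON with .config[].params.traddr (the script path and jq filter appear verbatim in the trace; the function bodies here are a readable reconstruction):

    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # no controllers found on this rig
        printf '%s\n' "${bdfs[@]}"
    }

    get_first_nvme_bdf() {
        local bdfs=($(get_nvme_bdfs))
        echo "${bdfs[0]}"                    # -> 0000:00:10.0 in this run
    }

The '[' -z 0000:00:10.0 ']' check that follows in the trace is the test aborting early if no bdf came back.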
00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.981 04:34:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.981 [2024-10-15 04:34:16.443878] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:10:26.981 [2024-10-15 04:34:16.444586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65631 ] 00:10:27.240 [2024-10-15 04:34:16.636113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.498 [2024-10-15 04:34:16.754728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.498 [2024-10-15 04:34:16.754943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.498 [2024-10-15 04:34:16.755096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.498 [2024-10-15 04:34:16.755117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 nvme0n1 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_o219F.txt 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:28.434 true 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728966857 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65654 00:10:28.434 04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:28.434 
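At this point the test has armed a one-shot injection: the next GET FEATURES (opc 10) admin command will be held for up to 15 s and then completed with SCT=0/SC=1 without reaching the device. A condensed sketch of the timed sequence traced above and below; the rpc invocations are copied from the trace, while the admin_cmd_b64 shorthand and the redirection into $tmp_file are assumptions added for readability (the literal base64 payload and the /tmp/err_inj_*.txt file both appear in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    start_time=$(date +%s)
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$admin_cmd_b64" \
        > "$tmp_file" &                        # this GET FEATURES gets stuck
    get_feat_pid=$!
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0      # the reset completes it manually
    wait "$get_feat_pid"                       # returns once the command finishes
    diff_time=$(( $(date +%s) - start_time ))  # checked against test_timeout=5

The (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) and (( diff_time > test_timeout )) checks at the end of the section then verify the completion status decoded from $tmp_file and the elapsed time.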
04:34:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:30.337 [2024-10-15 04:34:19.732079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:30.337 [2024-10-15 04:34:19.732652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:30.337 [2024-10-15 04:34:19.732800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:30.337 [2024-10-15 04:34:19.732936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:30.337 [2024-10-15 04:34:19.735518] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65654 00:10:30.337 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65654 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65654 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_o219F.txt 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:30.337 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_o219F.txt 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65631 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 65631 ']' 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 65631 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65631 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.596 killing process with pid 65631 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65631' 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 65631 00:10:30.596 04:34:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 65631 00:10:33.131 04:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:33.131 04:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:33.131 ************************************ 00:10:33.131 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:33.131 ************************************ 00:10:33.131 00:10:33.131 real 0m6.586s 
00:10:33.131 user 0m23.086s 00:10:33.131 sys 0m0.809s 00:10:33.131 04:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.131 04:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 04:34:22 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:33.390 04:34:22 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:33.390 04:34:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.390 04:34:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.390 04:34:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 ************************************ 00:10:33.390 START TEST nvme_fio 00:10:33.390 ************************************ 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:33.390 04:34:22 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:33.390 04:34:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:33.649 04:34:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:33.649 04:34:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:33.908 04:34:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:33.908 04:34:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:33.908 04:34:23 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:33.908 04:34:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:34.167 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:34.167 fio-3.35 00:10:34.167 Starting 1 thread 00:10:38.358 00:10:38.358 test: (groupid=0, jobs=1): err= 0: pid=65807: Tue Oct 15 04:34:26 2024 00:10:38.358 read: IOPS=20.9k, BW=81.5MiB/s (85.4MB/s)(163MiB/2001msec) 00:10:38.358 slat (nsec): min=3999, max=91233, avg=5099.79, stdev=1703.42 00:10:38.358 clat (usec): min=625, max=11961, avg=3057.20, stdev=695.76 00:10:38.358 lat (usec): min=637, max=12053, avg=3062.30, stdev=696.76 00:10:38.358 clat percentiles (usec): 00:10:38.358 | 1.00th=[ 2606], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:10:38.358 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:10:38.358 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3556], 00:10:38.358 | 99.00th=[ 7111], 99.50th=[ 8029], 99.90th=[ 8586], 99.95th=[ 9110], 00:10:38.358 | 99.99th=[11600] 00:10:38.358 bw ( KiB/s): min=77504, max=89120, per=97.64%, avg=81450.67, stdev=6642.78, samples=3 00:10:38.358 iops : min=19376, max=22280, avg=20362.67, stdev=1660.70, samples=3 00:10:38.358 write: IOPS=20.8k, BW=81.1MiB/s (85.1MB/s)(162MiB/2001msec); 0 zone resets 00:10:38.358 slat (nsec): min=4141, max=86526, avg=5260.80, stdev=1623.43 00:10:38.358 clat (usec): min=728, max=11756, avg=3063.42, stdev=699.69 00:10:38.358 lat (usec): min=741, max=11801, avg=3068.68, stdev=700.65 00:10:38.358 clat percentiles (usec): 00:10:38.358 | 1.00th=[ 2638], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:10:38.358 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:10:38.358 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3556], 00:10:38.358 | 99.00th=[ 7111], 99.50th=[ 8029], 99.90th=[ 8586], 99.95th=[ 9372], 00:10:38.358 | 99.99th=[11207] 00:10:38.358 bw ( KiB/s): min=77296, max=89328, per=98.17%, avg=81568.00, stdev=6731.78, samples=3 00:10:38.358 iops : min=19324, max=22332, avg=20392.00, stdev=1682.95, samples=3 00:10:38.358 lat (usec) : 750=0.01%, 1000=0.01% 00:10:38.358 lat (msec) : 2=0.19%, 4=96.06%, 10=3.71%, 20=0.04% 00:10:38.358 cpu : usr=99.00%, sys=0.35%, ctx=4, majf=0, minf=609 
00:10:38.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:38.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.358 issued rwts: total=41731,41564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.358 00:10:38.358 Run status group 0 (all jobs): 00:10:38.358 READ: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:10:38.358 WRITE: bw=81.1MiB/s (85.1MB/s), 81.1MiB/s-81.1MiB/s (85.1MB/s-85.1MB/s), io=162MiB (170MB), run=2001-2001msec 00:10:38.358 ----------------------------------------------------- 00:10:38.358 Suppressions used: 00:10:38.358 count bytes template 00:10:38.358 1 32 /usr/src/fio/parse.c 00:10:38.358 1 8 libtcmalloc_minimal.so 00:10:38.358 ----------------------------------------------------- 00:10:38.358 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:38.358 04:34:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:38.358 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:38.628 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:38.628 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:38.628 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:38.628 04:34:27 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:38.628 04:34:27 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:38.628 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:38.628 fio-3.35 00:10:38.628 Starting 1 thread 00:10:42.849 00:10:42.849 test: (groupid=0, jobs=1): err= 0: pid=65873: Tue Oct 15 04:34:31 2024 00:10:42.849 read: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec) 00:10:42.849 slat (nsec): min=3853, max=74035, avg=4704.75, stdev=1148.39 00:10:42.849 clat (usec): min=196, max=10633, avg=2885.78, stdev=411.91 00:10:42.849 lat (usec): min=201, max=10692, avg=2890.49, stdev=412.40 00:10:42.849 clat percentiles (usec): 00:10:42.849 | 1.00th=[ 2180], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2769], 00:10:42.849 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:42.849 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:10:42.849 | 99.00th=[ 4752], 99.50th=[ 5800], 99.90th=[ 8029], 99.95th=[ 8356], 00:10:42.849 | 99.99th=[10421] 00:10:42.849 bw ( KiB/s): min=85576, max=89864, per=99.11%, avg=87730.67, stdev=2144.08, samples=3 00:10:42.849 iops : min=21394, max=22466, avg=21932.67, stdev=536.02, samples=3 00:10:42.849 write: IOPS=22.0k, BW=85.9MiB/s (90.0MB/s)(172MiB/2001msec); 0 zone resets 00:10:42.849 slat (nsec): min=4100, max=77249, avg=4843.38, stdev=1099.55 00:10:42.849 clat (usec): min=222, max=10562, avg=2892.03, stdev=420.33 00:10:42.849 lat (usec): min=227, max=10572, avg=2896.87, stdev=420.80 00:10:42.849 clat percentiles (usec): 00:10:42.849 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:42.849 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:10:42.849 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:10:42.849 | 99.00th=[ 4817], 99.50th=[ 5997], 99.90th=[ 8029], 99.95th=[ 8586], 00:10:42.849 | 99.99th=[10159] 00:10:42.849 bw ( KiB/s): min=86648, max=89752, per=99.99%, avg=87912.00, stdev=1630.20, samples=3 00:10:42.849 iops : min=21662, max=22438, avg=21978.00, stdev=407.55, samples=3 00:10:42.849 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:42.849 lat (msec) : 2=0.68%, 4=97.87%, 10=1.39%, 20=0.02% 00:10:42.849 cpu : usr=99.35%, sys=0.10%, ctx=2, majf=0, minf=608 00:10:42.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:42.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.849 issued rwts: total=44279,43983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.849 00:10:42.849 Run status group 0 (all jobs): 00:10:42.849 READ: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:42.849 WRITE: bw=85.9MiB/s (90.0MB/s), 85.9MiB/s-85.9MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:42.849 ----------------------------------------------------- 00:10:42.849 Suppressions used: 00:10:42.849 count bytes template 00:10:42.849 1 32 /usr/src/fio/parse.c 00:10:42.849 1 8 libtcmalloc_minimal.so 00:10:42.849 ----------------------------------------------------- 00:10:42.849 
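The per-controller loop traced above repeats the same shape for every BDF: spdk_nvme_identify confirms the controller exposes at least one namespace, a grep for 'Extended Data LBA' decides whether a plain 4096-byte block size is safe, ldd locates the ASan runtime the fio plugin was linked against, and fio is launched with that runtime preloaded ahead of the SPDK ioengine so the sanitizer initializes first. A minimal sketch of that wrapper, with paths copied from the trace (the real fio_plugin helper in autotest_common.sh also checks for clang's libclang_rt.asan):

    #!/usr/bin/env bash
    # Sketch of the fio_plugin wrapper seen in the xtrace above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

    # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
    # field 3 is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload the sanitizer runtime (if any) before the ioengine plugin,
    # then hand fio the job file. Note the BDF colons become dots in
    # --filename because fio reserves ':' as a filename separator.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096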
00:10:42.849 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:42.849 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:42.849 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:42.849 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:42.849 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:42.849 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:43.417 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:43.417 04:34:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:43.417 04:34:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:43.417 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:43.417 fio-3.35 00:10:43.417 Starting 1 thread 00:10:47.608 00:10:47.608 test: (groupid=0, jobs=1): err= 0: pid=65939: Tue Oct 15 04:34:36 2024 00:10:47.608 read: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec) 00:10:47.608 slat (nsec): min=4007, max=80976, avg=4793.00, stdev=1372.89 00:10:47.608 clat (usec): min=244, max=11413, avg=2950.60, stdev=580.80 00:10:47.608 lat (usec): min=249, max=11494, avg=2955.39, stdev=581.68 00:10:47.608 clat percentiles (usec): 00:10:47.608 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:47.608 | 
30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:47.608 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3195], 00:10:47.608 | 99.00th=[ 6521], 99.50th=[ 7570], 99.90th=[ 8225], 99.95th=[ 9110], 00:10:47.608 | 99.99th=[11076] 00:10:47.608 bw ( KiB/s): min=83464, max=87208, per=98.56%, avg=85357.33, stdev=1872.36, samples=3 00:10:47.608 iops : min=20866, max=21802, avg=21339.33, stdev=468.09, samples=3 00:10:47.608 write: IOPS=21.5k, BW=84.0MiB/s (88.0MB/s)(168MiB/2001msec); 0 zone resets 00:10:47.608 slat (nsec): min=4104, max=46579, avg=4951.46, stdev=1386.85 00:10:47.608 clat (usec): min=189, max=11195, avg=2955.55, stdev=587.68 00:10:47.608 lat (usec): min=194, max=11208, avg=2960.50, stdev=588.55 00:10:47.608 clat percentiles (usec): 00:10:47.608 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:47.608 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:10:47.608 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3195], 00:10:47.608 | 99.00th=[ 6587], 99.50th=[ 7570], 99.90th=[ 8225], 99.95th=[ 9372], 00:10:47.608 | 99.99th=[10814] 00:10:47.608 bw ( KiB/s): min=83664, max=87816, per=99.49%, avg=85530.67, stdev=2107.42, samples=3 00:10:47.608 iops : min=20916, max=21954, avg=21382.67, stdev=526.86, samples=3 00:10:47.608 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:47.608 lat (msec) : 2=0.05%, 4=97.34%, 10=2.53%, 20=0.03% 00:10:47.608 cpu : usr=99.35%, sys=0.05%, ctx=5, majf=0, minf=609 00:10:47.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:47.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.608 issued rwts: total=43325,43005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.608 00:10:47.608 Run status group 0 (all jobs): 00:10:47.608 READ: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:10:47.608 WRITE: bw=84.0MiB/s (88.0MB/s), 84.0MiB/s-84.0MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:10:47.608 ----------------------------------------------------- 00:10:47.608 Suppressions used: 00:10:47.608 count bytes template 00:10:47.608 1 32 /usr/src/fio/parse.c 00:10:47.608 1 8 libtcmalloc_minimal.so 00:10:47.608 ----------------------------------------------------- 00:10:47.608 00:10:47.608 04:34:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:47.608 04:34:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:47.608 04:34:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:47.608 04:34:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:47.867 04:34:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:47.867 04:34:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:48.127 04:34:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:48.127 04:34:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:48.127 04:34:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:48.387 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:48.387 fio-3.35 00:10:48.387 Starting 1 thread 00:10:53.746 00:10:53.746 test: (groupid=0, jobs=1): err= 0: pid=66000: Tue Oct 15 04:34:42 2024 00:10:53.746 read: IOPS=21.7k, BW=84.9MiB/s (89.0MB/s)(170MiB/2001msec) 00:10:53.746 slat (usec): min=3, max=211, avg= 4.81, stdev= 1.93 00:10:53.746 clat (usec): min=341, max=11780, avg=2939.55, stdev=646.76 00:10:53.746 lat (usec): min=346, max=11848, avg=2944.36, stdev=647.63 00:10:53.746 clat percentiles (usec): 00:10:53.746 | 1.00th=[ 2114], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:10:53.746 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:53.746 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3294], 00:10:53.746 | 99.00th=[ 6587], 99.50th=[ 8094], 99.90th=[ 9241], 99.95th=[ 9634], 00:10:53.746 | 99.99th=[11338] 00:10:53.746 bw ( KiB/s): min=86392, max=87728, per=100.00%, avg=87000.00, stdev=676.04, samples=3 00:10:53.746 iops : min=21598, max=21932, avg=21750.00, stdev=169.01, samples=3 00:10:53.746 write: IOPS=21.6k, BW=84.2MiB/s (88.3MB/s)(169MiB/2001msec); 0 zone resets 00:10:53.746 slat (nsec): min=4073, max=99403, avg=4999.33, stdev=1544.24 00:10:53.746 clat (usec): min=200, max=11573, avg=2946.88, stdev=659.61 00:10:53.746 lat (usec): min=205, max=11586, avg=2951.88, stdev=660.43 00:10:53.746 clat percentiles (usec): 00:10:53.746 | 1.00th=[ 2147], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:10:53.746 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:53.746 | 70.00th=[ 2900], 80.00th=[ 2966], 
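fio only echoes its job banner into the log, not the job file itself. Assuming the logged parameters (rw=randrw, 4096-byte blocks, ioengine=spdk, iodepth=128, a single job named "test", a roughly 2-second run), example_config.fio plausibly reduces to something like the following hypothetical reconstruction; the real file ships in the SPDK tree and may set further options:

    # Hypothetical reconstruction of the job file from fio's banner above
    # ("test: (g=0): rw=randrw, bs=(R) 4096B-4096B, ioengine=spdk,
    # iodepth=128"); the shipped example_config.fio may differ.
    cat > /tmp/example_config.fio <<'EOF'
    [global]
    ioengine=spdk
    thread=1
    direct=1
    rw=randrw
    bs=4096
    iodepth=128
    time_based=1
    runtime=2

    [test]
    numjobs=1
    EOF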
90.00th=[ 3064], 95.00th=[ 3326], 00:10:53.746 | 99.00th=[ 6783], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[ 9634], 00:10:53.746 | 99.99th=[11207] 00:10:53.746 bw ( KiB/s): min=86288, max=88744, per=100.00%, avg=87208.00, stdev=1338.87, samples=3 00:10:53.746 iops : min=21572, max=22186, avg=21802.00, stdev=334.72, samples=3 00:10:53.746 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:53.746 lat (msec) : 2=0.72%, 4=96.36%, 10=2.84%, 20=0.04% 00:10:53.746 cpu : usr=99.25%, sys=0.05%, ctx=29, majf=0, minf=606 00:10:53.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:53.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.746 issued rwts: total=43478,43152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.746 00:10:53.746 Run status group 0 (all jobs): 00:10:53.746 READ: bw=84.9MiB/s (89.0MB/s), 84.9MiB/s-84.9MiB/s (89.0MB/s-89.0MB/s), io=170MiB (178MB), run=2001-2001msec 00:10:53.746 WRITE: bw=84.2MiB/s (88.3MB/s), 84.2MiB/s-84.2MiB/s (88.3MB/s-88.3MB/s), io=169MiB (177MB), run=2001-2001msec 00:10:53.746 ----------------------------------------------------- 00:10:53.746 Suppressions used: 00:10:53.746 count bytes template 00:10:53.746 1 32 /usr/src/fio/parse.c 00:10:53.746 1 8 libtcmalloc_minimal.so 00:10:53.746 ----------------------------------------------------- 00:10:53.746 00:10:53.746 04:34:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:53.746 04:34:42 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:53.746 00:10:53.746 real 0m20.287s 00:10:53.746 user 0m14.815s 00:10:53.746 sys 0m6.506s 00:10:53.746 04:34:42 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.746 ************************************ 00:10:53.746 END TEST nvme_fio 00:10:53.746 ************************************ 00:10:53.746 04:34:42 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:53.746 ************************************ 00:10:53.746 END TEST nvme 00:10:53.746 ************************************ 00:10:53.746 00:10:53.746 real 1m35.479s 00:10:53.746 user 3m43.665s 00:10:53.746 sys 0m25.577s 00:10:53.746 04:34:42 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.746 04:34:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.746 04:34:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:53.746 04:34:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:53.746 04:34:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.746 04:34:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.746 04:34:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.746 ************************************ 00:10:53.746 START TEST nvme_scc 00:10:53.746 ************************************ 00:10:53.746 04:34:43 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:53.746 * Looking for test storage... 
00:10:53.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:53.746 04:34:43 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.746 04:34:43 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.746 04:34:43 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.005 04:34:43 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.005 04:34:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:54.006 04:34:43 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.006 04:34:43 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.006 --rc genhtml_branch_coverage=1 00:10:54.006 --rc genhtml_function_coverage=1 00:10:54.006 --rc genhtml_legend=1 00:10:54.006 --rc geninfo_all_blocks=1 00:10:54.006 --rc geninfo_unexecuted_blocks=1 00:10:54.006 00:10:54.006 ' 00:10:54.006 04:34:43 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.006 --rc genhtml_branch_coverage=1 00:10:54.006 --rc genhtml_function_coverage=1 00:10:54.006 --rc genhtml_legend=1 00:10:54.006 --rc geninfo_all_blocks=1 00:10:54.006 --rc geninfo_unexecuted_blocks=1 00:10:54.006 00:10:54.006 ' 00:10:54.006 04:34:43 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.006 --rc genhtml_branch_coverage=1 00:10:54.006 --rc genhtml_function_coverage=1 00:10:54.006 --rc genhtml_legend=1 00:10:54.006 --rc geninfo_all_blocks=1 00:10:54.006 --rc geninfo_unexecuted_blocks=1 00:10:54.006 00:10:54.006 ' 00:10:54.006 04:34:43 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.006 --rc genhtml_branch_coverage=1 00:10:54.006 --rc genhtml_function_coverage=1 00:10:54.006 --rc genhtml_legend=1 00:10:54.006 --rc geninfo_all_blocks=1 00:10:54.006 --rc geninfo_unexecuted_blocks=1 00:10:54.006 00:10:54.006 ' 00:10:54.006 04:34:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.006 04:34:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.006 04:34:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.006 04:34:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.006 04:34:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.006 04:34:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:54.006 04:34:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
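The lcov gate traced above relies on cmp_versions from scripts/common.sh, which splits each version string on '.', '-' and ':' and compares the numeric fields left to right; here `lt 1.15 2` succeeds, so the pre-2.0 lcov option names are exported. A condensed sketch of that comparison, simplified to a strict less-than with missing fields defaulting to 0 (the real decimal() helper also accepts hex):

    # Sketch of the version comparison driven by "lt 1.15 2" above.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0   # non-numeric fields count as 0
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates the 2.x option names"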
00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:54.006 04:34:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:54.006 04:34:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.006 04:34:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:54.006 04:34:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:54.006 04:34:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:54.006 04:34:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:54.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:54.574 Waiting for block devices as requested 00:10:54.832 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.832 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.090 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.090 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:00.359 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:00.359 04:34:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:00.359 04:34:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:00.359 04:34:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:00.359 04:34:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.359 04:34:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
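Each nvme_get call traced here drives the same loop: nvme-cli's id-ctrl output is read line by line, split once on ':' into a register name and a value, and stashed in a per-controller associative array, which is what produces the long run of eval/assignment pairs that follows. A simplified sketch of that parser (the real functions.sh evals into a named global array such as nvme0 and preserves padded values like '12341 ' verbatim):

    # Simplified sketch of the nvme_get parsing loop in nvme/functions.sh.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces from the value
        # Skip banner/blank lines; 'read' keeps any further ':' in $val,
        # so values like nqn.2019-08.org.qemu:12341 survive intact.
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"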
00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.359 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.360 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:00.361 04:34:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:00.361 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.362 04:34:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:00.362 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
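Worth noting why all of this is tagged `nvme_scc`: the Simple Copy test ultimately needs only a few of these fields, chiefly ONCS (captured above as `nvme0[oncs]=0x15d`), whose bit 8 advertises the NVMe Copy command. A hedged sketch of how such a capability gate could be expressed over the arrays just filled in; the helper name `ctrl_supports_scc` is hypothetical, not SPDK's actual function.

```bash
# Hypothetical capability gate over the arrays filled in above.
# In NVMe, ONCS bit 8 (mask 0x100) advertises the Copy (Simple Copy) command.
ctrl_supports_scc() { # usage: ctrl_supports_scc nvme0
	local -n _ctrl=$1               # nameref to the controller's array
	(((_ctrl[oncs] & 0x100) != 0))  # 0x15d & 0x100 -> non-zero, supported
}

ctrl_supports_scc nvme0 && echo "nvme0 supports Simple Copy"
```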
00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.363 04:34:49 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.363 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:00.364 04:34:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:00.364 04:34:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:00.364 04:34:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.364 04:34:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:00.364 04:34:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.364 
04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.364 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
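The namespace geometry captured earlier for nvme0n1 is enough to work out its size: `flbas=0x4` selects LBA format 4, the trace marks `lbaf4` as `ms:0 lbads:12 rp:0 (in use)` (2^12 = 4096-byte blocks, no metadata), and `nsze=0x140000` counts blocks. A quick check of that arithmetic using the values from this trace:

```bash
# nvme0n1 as captured above: nsze blocks of 2^lbads bytes each.
nsze=0x140000 lbads=12
echo $((nsze * (1 << lbads)))          # 5368709120 bytes
echo $((nsze * (1 << lbads) >> 30))GiB # 5GiB
```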
00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:00.365 04:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.365 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:00.366 04:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:00.366 04:34:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
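The switch from the nvme0 dump to this nvme1 one is driven by the outer scan loop traced at functions.sh@47-63: it walks `/sys/class/nvme/nvme*`, filters each controller's PCI address through `pci_can_use` (0000:00:11.0 for nvme0, 0000:00:10.0 for nvme1), runs `nvme_get`, enumerates namespaces, and records everything in the `ctrls`, `nvmes`, `bdfs`, and `ordered_ctrls` maps. A condensed reconstruction of that loop under stated assumptions: the wrapper name `scan_ctrls_sketch` is a placeholder, the BDF lookup is inferred (the trace only shows its result), and `pci_can_use` internals are elided.

```bash
# Condensed reconstruction of the scan loop traced at functions.sh@47-63.
# scan_ctrls_sketch is a placeholder name; the readlink-based BDF lookup
# is an assumption (the trace only shows the result, e.g. pci=0000:00:10.0).
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

scan_ctrls_sketch() {
	local ctrl ctrl_dev ns ns_dev pci
	for ctrl in /sys/class/nvme/nvme*; do
		[[ -e $ctrl ]] || continue
		pci=$(basename "$(readlink -f "$ctrl/device")")
		pci_can_use "$pci" || continue # honors PCI allow/block lists

		ctrl_dev=${ctrl##*/} # nvme0, nvme1, ...
		nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"

		declare -gA "${ctrl_dev}_ns=()"
		local -n _ctrl_ns=${ctrl_dev}_ns
		for ns in "$ctrl/${ctrl##*/}n"*; do # e.g. /sys/class/nvme/nvme1/nvme1n1
			[[ -e $ns ]] || continue
			ns_dev=${ns##*/}
			nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
			_ctrl_ns[${ns##*n}]=$ns_dev # keyed by namespace id
		done

		ctrls["$ctrl_dev"]=$ctrl_dev
		nvmes["$ctrl_dev"]=${ctrl_dev}_ns
		bdfs["$ctrl_dev"]=$pci
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
	done
}
```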
00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.366 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.367 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.368 
04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:00.368 04:34:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:00.368 04:34:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:00.368 04:34:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.368 04:34:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:00.368 04:34:49 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.368 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:00.369 04:34:49 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:00.369 04:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
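
    One detail worth noting in the entries just above: wctemp and cctemp are reported in kelvin, as the Identify Controller data structure defines them, so QEMU's 343/373 are ordinary thresholds once converted (a quick sketch using the populated array):

    # wctemp/cctemp are kelvin; converting the values logged above:
    echo $(( nvme2[wctemp] - 273 ))   # 343 K -> 70 C, warning threshold
    echo $(( nvme2[cctemp] - 273 ))   # 373 K -> 100 C, critical threshold
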
00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:00.369 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:00.370 04:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:00.370 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
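
    The oncs word set just below (0x15d for this controller) is the Optional NVM Command Support bitmask, and it is the field this nvme_scc suite ultimately cares about: bit 8 advertises the Copy (simple copy) command. Once the array is populated, the check is a one-line arithmetic test; this is a sketch, since the suite's own helper is not visible in this part of the trace:

    # Bash arithmetic accepts the 0x prefix, so the stored string works as-is.
    if (( (nvme2[oncs] >> 8) & 1 )); then
        echo "nvme2 advertises the Copy command"   # true for 0x15d (bit 8 set)
    fi
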
00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.636 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:00.637 
04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
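
    For the namespace itself, the nsze/ncap/nuse values above are LBA counts, and flbas selects the in-use LBA format, here 0x4, i.e. lbaf4. Assuming lbaf4 carries lbads:12 (4096-byte blocks) for this namespace as it did for nvme1n1 earlier in the trace, the usable size falls out directly:

    lbads=12                                   # from "lbaf4 ... lbads:12", assumed here
    echo $(( nvme2n1[nsze] * (1 << lbads) ))   # 0x100000 * 4096 = 4294967296 bytes (4 GiB)
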
00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.637 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.638 04:34:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.638 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:00.639 04:34:49 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:00.639 04:34:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:00.639 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
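Worth pausing on what the captured values imply: nsze, ncap and nuse are all 0x100000 blocks, and flbas=0x4 selects LBA format 4, whose descriptor in this trace reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A quick size check with those numbers:

    nsze=$(( 0x100000 ))   # 1048576 blocks, from ${nvme2n2[nsze]}
    lbads=12               # from the lbaf4 descriptor selected by flbas=0x4
    echo "$(( nsze * (1 << lbads) )) bytes"   # 4294967296, a 4 GiB namespace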
00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
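Each lbafN register ends up stored as the raw descriptor string (for example 'ms:0 lbads:12 rp:0 (in use)'), so anything consuming these arrays has to parse the sub-fields back out. A hypothetical helper for that, not part of functions.sh:

    # Hypothetical: extract one sub-field (ms, lbads, rp) from a stored
    # LBA-format descriptor string.
    lbaf_field() {
        local desc=$1 field=$2
        [[ $desc =~ $field:([0-9]+) ]] && echo "${BASH_REMATCH[1]}"
    }

    # lbaf_field 'ms:0 lbads:12 rp:0 (in use)' lbads   # prints 12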
00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 
04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:00.640 04:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 
04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.640 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:00.641 04:34:50 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.641 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.641 
04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:00.642 04:34:50 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:00.642 04:34:50 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:00.642 04:34:50 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:00.642 04:34:50 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:00.642 04:34:50 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
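The trace has now moved on from id-ns to id-ctrl for /dev/nvme3: vid 0x1b36, ssvid 0x1af4 and the 'QEMU NVMe Ctrl' model string in the preceding lines mark this as a QEMU-emulated controller. Two of the numeric fields captured just above decode as follows; the 4 KiB page size is the usual CAP.MPSMIN granularity and is assumed here, since the CAP register itself is not read in this trace:

    ver=$(( 0x10400 )); mdts=7   # values captured in this stretch of the trace
    printf 'NVMe %d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff ))  # NVMe 1.4
    # MDTS is in units of the minimum page size: 2^7 * 4096 = 512 KiB max I/O
    echo "$(( (1 << mdts) * 4096 )) bytes"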
00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:00.642 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
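Two more controller fields captured just above are themselves bitmasks or enums: oaes=0x100 sets bit 8, which (per the NVMe 1.4 Identify Controller layout, treat these labels as best-effort annotation rather than authoritative) is the Namespace Attribute Notices async-event capability, and cntrltype=1 marks an I/O controller as opposed to a discovery or administrative one. The same bit-test pattern applies to any of these masks:

    oaes=$(( 0x100 )); cntrltype=1
    (( oaes & 1 << 8 )) && echo 'Namespace Attribute Notices supported'
    case $cntrltype in
        1) echo 'I/O controller' ;;
        2) echo 'Discovery controller' ;;
        3) echo 'Administrative controller' ;;
    esac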
00:11:00.643 04:34:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl register parse for nvme3 (abridged; for each register the loop runs IFS=:, read -r reg val, [[ -n $val ]], then eval 'nvme3[<reg>]="<val>"')
00:11:00.643 04:34:50 nvme_scc -- # nvme3: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:11:00.643 04:34:50 nvme_scc -- # nvme3: sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:00.644 04:34:50 nvme_scc -- # nvme3: subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:11:00.644 04:34:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:11:00.644 04:34:50 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc run in turn for nvme1, nvme0, nvme3 and nvme2 (abridged): for each, get_oncs reads oncs=0x15d, (( oncs & 1 << 8 )) holds, and the controller name is echoed
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:00.904 04:34:50 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:00.904 04:34:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:00.904 04:34:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:00.904 04:34:50 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:01.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:02.039 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:02.039 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:02.354 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:02.354 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:02.354 04:34:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:02.354 04:34:51 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:02.354 04:34:51 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:02.354 04:34:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:02.354 ************************************
00:11:02.354 START TEST nvme_simple_copy
00:11:02.354 ************************************
00:11:02.354 04:34:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:02.613 Initializing NVMe Controllers
00:11:02.613 Attaching to 0000:00:10.0
00:11:02.613 Controller supports SCC. Attached to 0000:00:10.0
00:11:02.613 Namespace ID: 1 size: 6GB
00:11:02.613 Initialization complete.
00:11:02.613
00:11:02.613 Controller QEMU NVMe Ctrl (12340 )
00:11:02.613 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:02.613 Namespace Block Size:4096
00:11:02.613 Writing LBAs 0 to 63 with Random Data
00:11:02.613 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:02.613 LBAs matching Written Data: 64
00:11:02.613
00:11:02.613 real 0m0.320s
00:11:02.613 user 0m0.126s
00:11:02.613 sys 0m0.093s
00:11:02.613 ************************************
00:11:02.613 END TEST nvme_simple_copy
00:11:02.613 ************************************
00:11:02.613 04:34:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:02.613 04:34:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:02.872 ************************************
00:11:02.872 END TEST nvme_scc
00:11:02.872 ************************************
00:11:02.872
00:11:02.872 real 0m9.077s
00:11:02.872 user 0m1.650s
00:11:02.872 sys 0m2.413s
00:11:02.872 04:34:52 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:02.872 04:34:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:02.872 04:34:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:02.872 04:34:52 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:02.872 04:34:52 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:02.872 04:34:52 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:02.872 04:34:52 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:02.872 04:34:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:11:02.872 04:34:52 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:02.872 04:34:52 -- common/autotest_common.sh@10 -- # set +x
00:11:02.872 ************************************
00:11:02.872 START TEST nvme_fdp
00:11:02.872 ************************************
00:11:02.872 04:34:52 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:11:02.872 * Looking for test storage...
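The controller selection traced before setup.sh rebound the devices reduces to a single arithmetic test: ONCS bit 8 (mask 0x100) advertises the NVMe Simple Copy Command, and every controller here reported oncs=0x15d. A minimal standalone re-creation of that check, with the value taken from this log:

  oncs=0x15d                     # ONCS value reported by all four controllers above
  if (( oncs & 1 << 8 )); then   # bit 8 (0x100) = Simple Copy Command supported
    echo "controller supports SCC"
  fi

0x15d & 0x100 is 0x100, so each of nvme0 through nvme3 qualifies and the script simply takes the first match, nvme1.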
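For orientation, the nvme_simple_copy output above means: LBAs 0 to 63 were filled with random data, copied with the Simple Copy command to destination LBA 256, and all 64 destination LBAs compared equal. A sketch of the same comparison done by hand, assuming a kernel-attached namespace at a hypothetical /dev/nvme0n1 (the test devices in this run are bound to uio_pci_generic and have no block node, and the SPDK test drives the controller from userspace, so this is purely illustrative) and the 4096-byte block size the test printed:

  bs=4096                                      # "Namespace Block Size:4096" above
  dd if=/dev/nvme0n1 bs=$bs skip=0   count=64 of=/tmp/src.bin status=none
  dd if=/dev/nvme0n1 bs=$bs skip=256 count=64 of=/tmp/dst.bin status=none
  cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"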
00:11:02.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:02.872 04:34:52 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:02.872 04:34:52 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:02.872 04:34:52 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:03.132 04:34:52 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.132 04:34:52 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:03.132 04:34:52 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.132 04:34:52 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:03.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.132 --rc genhtml_branch_coverage=1 00:11:03.132 --rc genhtml_function_coverage=1 00:11:03.132 --rc genhtml_legend=1 00:11:03.132 --rc geninfo_all_blocks=1 00:11:03.132 --rc geninfo_unexecuted_blocks=1 00:11:03.132 00:11:03.132 ' 00:11:03.132 04:34:52 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:03.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.132 --rc genhtml_branch_coverage=1 00:11:03.132 --rc genhtml_function_coverage=1 00:11:03.132 --rc genhtml_legend=1 00:11:03.132 --rc geninfo_all_blocks=1 00:11:03.133 --rc geninfo_unexecuted_blocks=1 00:11:03.133 00:11:03.133 ' 00:11:03.133 04:34:52 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:03.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.133 --rc genhtml_branch_coverage=1 00:11:03.133 --rc genhtml_function_coverage=1 00:11:03.133 --rc genhtml_legend=1 00:11:03.133 --rc geninfo_all_blocks=1 00:11:03.133 --rc geninfo_unexecuted_blocks=1 00:11:03.133 00:11:03.133 ' 00:11:03.133 04:34:52 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:03.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.133 --rc genhtml_branch_coverage=1 00:11:03.133 --rc genhtml_function_coverage=1 00:11:03.133 --rc genhtml_legend=1 00:11:03.133 --rc geninfo_all_blocks=1 00:11:03.133 --rc geninfo_unexecuted_blocks=1 00:11:03.133 00:11:03.133 ' 00:11:03.133 04:34:52 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.133 04:34:52 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.133 04:34:52 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.133 04:34:52 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.133 04:34:52 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.133 04:34:52 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.133 04:34:52 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.133 04:34:52 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.133 04:34:52 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:03.133 04:34:52 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
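The lcov probe traced above (scripts/common.sh running "lt 1.15 2" through cmp_versions) is a component-wise compare: both version strings are split on ".", "-" and ":", then compared field by field with missing fields treated as 0. A minimal re-creation under those assumptions (the function name ver_lt is ours, not the script's):

  ver_lt() {                     # ver_lt A B: succeed when version A < version B
    local -a v1 v2
    local i max
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                     # versions are equal
  }
  ver_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_*_coverage option names"

Here 1 < 2 decides at the first component, which is why LCOV_OPTS above uses the pre-2.0 "--rc lcov_branch_coverage=1" spellings.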
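Separately, the PATH echoed by paths/export.sh above shows the golangci/protoc/go prefixes stacked four times, because each nested source prepends them unconditionally. The usual idempotent-prepend idiom (a standard fix, not what export.sh actually does) looks like this:

  path_prepend() {               # add $1 to the front of PATH only if absent
    case ":$PATH:" in
      *":$1:"*) ;;               # already present: leave PATH unchanged
      *) PATH=$1:$PATH ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin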
00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:03.133 04:34:52 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:03.133 04:34:52 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.133 04:34:52 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:03.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:03.961 Waiting for block devices as requested 00:11:03.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.220 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.220 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.220 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.499 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:09.500 04:34:58 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:09.500 04:34:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:09.500 04:34:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:09.500 04:34:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:09.500 04:34:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
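scan_nvme_ctrls, which starts here, walks /sys/class/nvme/nvme* and resolves each controller to its PCI address (the trace shows pci=0000:00:11.0 for nvme0) before deciding whether the test may use it. A standalone sketch of that enumeration using the standard sysfs layout:

  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                         # same existence guard as the trace
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:11.0
    echo "${ctrl##*/} -> $pci"
  done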
00:11:09.500 04:34:58 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ctrl register parse for nvme0 (abridged; same read/eval pattern as the nvme3 scan above)
00:11:09.500 04:34:58 nvme_fdp -- # nvme0: ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:09.501 04:34:58 nvme_fdp -- # nvme0: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
00:11:09.502 04:34:58 nvme_fdp -- # nvme0: hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0
00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.502 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:09.503 04:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:09.503 
04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:09.503 04:34:58 nvme_fdp -- 
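For readability: the functions.sh@16-23 entries above are iterations of a single key/value parse loop. Below is a minimal sketch of what the trace shows that loop doing, reconstructed from the xtrace alone; the names match the traced script, but this is a simplification, not the verbatim SPDK source.

    # Reconstructed sketch of nvme_get (nvme/functions.sh@16-23) as seen in the trace.
    # It runs nvme-cli's id-ctrl/id-ns, splits each "field : value" output line on
    # the first ':', and stores the pairs in a global associative array named $ref
    # (nvme0, nvme0n1, nvme1, ...).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                       # skip lines with no value
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

So nvme_get nvme0n1 id-ns /dev/nvme0n1 leaves the namespace fields in ${nvme0n1[...]} exactly as echoed in the trace lines that follow.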
00:11:09.503 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (cont.): nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:09.504 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (cont.): nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:09.504 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (cont.): mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:11:09.504 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (cont.): nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:09.504 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (cont.): lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:09.504 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (cont.): lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:11:09.505 04:34:58 nvme_fdp -- nvme/functions.sh@58-63 -- # registered: _ctrl_ns[1]=nvme0n1 ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
00:11:09.505 04:34:58 nvme_fdp -- nvme/functions.sh@47-51 -- # next controller: /sys/class/nvme/nvme1 exists, pci=0000:00:10.0, pci_can_use 0000:00:10.0 -> 0, ctrl_dev=nvme1, nvme_get nvme1 id-ctrl /dev/nvme1
00:11:09.505 04:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
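The functions.sh@47-63 entries just above are the bookkeeping half of the enumeration. Roughly, again reconstructed from the trace rather than quoted from the SPDK source (how $pci is derived from sysfs is not visible here, so treat that part as an assumption):

    # Sketch of the controller scan (nvme/functions.sh@47-63) as traced.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # $pci (e.g. 0000:00:10.0) comes from sysfs; derivation not shown in this trace
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                    # nvme0, nvme1, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # the @53-58 namespace loop fills the per-namespace map ${ctrl_dev}_ns
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci                  # 0000:00:11.0 for nvme0 above
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

Worth noting from the values themselves: lbaf4, the format marked (in use), has lbads:12, i.e. 2^12 = 4096-byte logical blocks with ms:0 (no metadata), while the lbads:9 formats are 512-byte.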
00:11:09.505 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7
00:11:09.505 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:11:09.506 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:11:09.506 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:11:09.506 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
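A few of the nvme1 fields just captured decode as follows. This is standard NVMe Identify Controller field semantics; the arithmetic below is added here for clarity and is not part of the test itself.

    # SQES/CQES pack the required (low nibble) and maximum (high nibble)
    # queue-entry sizes as powers of two:
    echo $((1 << (0x66 & 0xf)))   # sqes=0x66 -> 64-byte submission queue entries
    echo $((1 << (0x44 & 0xf)))   # cqes=0x44 -> 16-byte completion queue entries
    # WCTEMP/CCTEMP are in kelvin: wctemp=343 K is a ~70 C warning threshold,
    # cctemp=373 K a ~100 C critical threshold.
    # mdts=7 with a 4 KiB CAP.MPSMIN page size would mean 2^7 * 4 KiB = 512 KiB
    # max transfer; MPSMIN is not visible in this part of the trace, so that
    # last figure is an assumption.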
00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
nvme1[ofcs]=0 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:09.507 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:09.508 04:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.508 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.775 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:09.776 04:34:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:09.776 04:34:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:09.776 04:34:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:09.776 04:34:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:09.776 
04:34:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:09.776 04:34:59 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.776 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:09.777 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:09.778 04:34:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:09.778 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:09.779 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
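nvme2n1 reports nlbaf=7 (eight LBA formats, the field is zero-based) and flbas=0x4: the low nibble of FLBAS selects the in-use format, so the lbaf4 entry captured a few lines below ('ms:0 lbads:12 rp:0 (in use)') applies, i.e. 2^12 = 4096-byte data blocks with no metadata. A quick decode, assuming the nvme2n1 array populated as in the trace:

    # Sketch: decode the in-use LBA format from the captured fields.
    idx=$(( ${nvme2n1[flbas]} & 0xf ))           # low nibble -> format index 4
    lbaf=${nvme2n1[lbaf$idx]}                    # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=$(grep -o 'lbads:[0-9]*' <<<"$lbaf" | cut -d: -f2)
    echo "block size: $(( 1 << lbads )) bytes"   # 1 << 12 = 4096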
00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
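These id-ns fields are being collected inside the per-namespace loop that opened at functions.sh@54: each nvme2n* entry under /sys/class/nvme/nvme2 gets its own nvme_get ... id-ns pass, and the result is indexed by namespace number through the _ctrl_ns nameref (functions.sh@58, visible after each namespace below). A condensed sketch under those assumptions; the real script declares nvme2_ns elsewhere before this runs:

    # Condensed sketch of the namespace enumeration traced at functions.sh@53-58.
    enumerate_ns() {
        local ctrl=$1 ns ns_dev                  # ctrl = /sys/class/nvme/nvme2
        local -n _ctrl_ns="${ctrl##*/}_ns"       # nameref to nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme2n1, nvme2n2, nvme2n3
            [[ -e $ns ]] || continue             # functions.sh@55
            ns_dev=${ns##*/}                     # -> nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev          # e.g. nvme2_ns[1]=nvme2n1
        done
    }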
00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.779 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.780 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:09.781 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:09.781 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
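All three namespaces on nvme2 report identical geometry: nsze = ncap = nuse = 0x100000 blocks in the 4 KiB format decoded earlier, i.e. 4 GiB each, fully allocated. Checking the arithmetic against the captured values:

    # Sketch: namespace capacity from the captured fields (0x100000 blocks).
    blocks=$(( ${nvme2n3[nsze]} ))               # 0x100000 = 1048576
    echo "$(( blocks * 4096 / 1024**3 )) GiB"    # -> 4 GiB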
00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.782 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:09.783 
04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:09.783 04:34:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:09.783 04:34:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:09.783 04:34:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:09.783 04:34:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:09.783 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:09.783 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 
04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.784 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 
04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:09.785 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
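
The xtrace above shows nvme/functions.sh building one bash associative array per controller out of "nvme id-ctrl" output (the repeated IFS=: / read -r reg val / eval triplets), and the capability scan around this point tests CTRATT bit 19 to decide which controller supports FDP. A minimal sketch of that pattern, assuming nvme-cli's "name : value" id-ctrl text format; the array and variable names are illustrative, not the verbatim functions.sh code:

    # Parse "nvme id-ctrl" output into an associative array, one entry per field.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "ctratt   " -> "ctratt"
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }                   # e.g. ctrl[ctratt]=0x88010
    done < <(nvme id-ctrl /dev/nvme3)

    # FDP support is CTRATT bit 19: 0x88010 has it set, while the 0x8000
    # controllers in the scan do not, which is why only nvme3 is echoed.
    if (( ${ctrl[ctratt]} & 1 << 19 )); then
        echo "nvme3 supports FDP"
    fi
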
00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:09.786 04:34:59 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:09.786 04:34:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:09.787 04:34:59 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:09.787 04:34:59 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:10.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.294 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.294 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.294 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.554 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.554 04:35:00 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:11.554 04:35:00 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:11.554 04:35:00 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.554 04:35:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:11.554 ************************************ 00:11:11.554 START TEST nvme_flexible_data_placement 00:11:11.554 ************************************ 00:11:11.554 04:35:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:11.814 Initializing NVMe Controllers 00:11:11.814 Attaching to 0000:00:13.0 00:11:11.814 Controller supports FDP Attached to 0000:00:13.0 00:11:11.814 Namespace ID: 1 Endurance Group ID: 1 00:11:11.814 Initialization complete. 00:11:11.814 00:11:11.814 ================================== 00:11:11.814 == FDP tests for Namespace: #01 == 00:11:11.814 ================================== 00:11:11.814 00:11:11.814 Get Feature: FDP: 00:11:11.814 ================= 00:11:11.814 Enabled: Yes 00:11:11.814 FDP configuration Index: 0 00:11:11.814 00:11:11.814 FDP configurations log page 00:11:11.814 =========================== 00:11:11.814 Number of FDP configurations: 1 00:11:11.814 Version: 0 00:11:11.814 Size: 112 00:11:11.814 FDP Configuration Descriptor: 0 00:11:11.814 Descriptor Size: 96 00:11:11.814 Reclaim Group Identifier format: 2 00:11:11.814 FDP Volatile Write Cache: Not Present 00:11:11.814 FDP Configuration: Valid 00:11:11.814 Vendor Specific Size: 0 00:11:11.814 Number of Reclaim Groups: 2 00:11:11.814 Number of Reclaim Unit Handles: 8 00:11:11.814 Max Placement Identifiers: 128 00:11:11.814 Number of Namespaces Supported: 256 00:11:11.814 Reclaim Unit Nominal Size: 6000000 bytes 00:11:11.814 Estimated Reclaim Unit Time Limit: Not Reported 00:11:11.814 RUH Desc #000: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #001: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #002: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #003: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #004: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #005: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #006: RUH Type: Initially Isolated 00:11:11.814 RUH Desc #007: RUH Type: Initially Isolated 00:11:11.814 00:11:11.814 FDP reclaim unit handle usage log page 00:11:11.814 ====================================== 00:11:11.814 Number of Reclaim Unit Handles: 8 00:11:11.814 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:11.814 RUH Usage Desc #001: RUH Attributes: Unused 00:11:11.814 RUH Usage Desc #002: RUH Attributes: Unused 00:11:11.814 RUH Usage Desc #003: RUH Attributes: Unused 00:11:11.814 RUH Usage Desc #004: RUH Attributes: Unused 00:11:11.814 RUH Usage Desc #005: RUH Attributes: Unused 00:11:11.814 RUH Usage Desc #006: RUH Attributes: Unused 00:11:11.814 RUH Usage Desc #007: RUH Attributes: Unused 00:11:11.814 00:11:11.814 FDP statistics log page 00:11:11.814 ======================= 00:11:11.814 Host bytes with metadata written: 913223680 00:11:11.814 Media bytes with metadata written: 913321984 00:11:11.814 Media bytes erased: 0 00:11:11.814 00:11:11.814 FDP Reclaim unit handle status 00:11:11.814 ============================== 00:11:11.814 Number of RUHS descriptors: 2 00:11:11.814 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005915 00:11:11.814 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:11.814 00:11:11.814 FDP write on placement id: 0 success 00:11:11.814 00:11:11.814 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:11:11.814 00:11:11.814 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:11.814 00:11:11.814 Get Feature: FDP Events for Placement handle: #0 00:11:11.814 ======================== 00:11:11.814 Number of FDP Events: 6 00:11:11.814 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:11.814 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:11.814 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:11.814 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:11.814 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:11.814 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:11.814 00:11:11.814 FDP events log page 00:11:11.814 =================== 00:11:11.814 Number of FDP events: 1 00:11:11.814 FDP Event #0: 00:11:11.814 Event Type: RU Not Written to Capacity 00:11:11.814 Placement Identifier: Valid 00:11:11.814 NSID: Valid 00:11:11.814 Location: Valid 00:11:11.814 Placement Identifier: 0 00:11:11.814 Event Timestamp: 8 00:11:11.814 Namespace Identifier: 1 00:11:11.814 Reclaim Group Identifier: 0 00:11:11.814 Reclaim Unit Handle Identifier: 0 00:11:11.814 00:11:11.814 FDP test passed 00:11:11.814 00:11:11.814 real 0m0.275s 00:11:11.814 user 0m0.083s 00:11:11.814 sys 0m0.090s 00:11:11.814 04:35:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.814 04:35:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:11.814 ************************************ 00:11:11.814 END TEST nvme_flexible_data_placement 00:11:11.814 ************************************ 00:11:11.814 00:11:11.814 real 0m9.061s 00:11:11.814 user 0m1.664s 00:11:11.814 sys 0m2.435s 00:11:11.814 04:35:01 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.814 04:35:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:11.814 ************************************ 00:11:11.814 END TEST nvme_fdp 00:11:11.814 ************************************ 00:11:12.075 04:35:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:12.075 04:35:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:12.075 04:35:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:12.075 04:35:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.075 04:35:01 -- common/autotest_common.sh@10 -- # set +x 00:11:12.075 ************************************ 00:11:12.075 START TEST nvme_rpc 00:11:12.075 ************************************ 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:12.075 * Looking for test storage... 
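
The banner blocks and the real/user/sys triplets above come from the harness's run_test wrapper, which prints a START banner, times the test body, and closes with an END banner. A hedged sketch of that wrapper under illustrative names (not the verbatim autotest_common.sh implementation):

    run_test_sketch() {
        local name=$1 rc; shift
        echo "************ START TEST $name ************"
        time "$@"                  # emits the real/user/sys lines seen in the log
        rc=$?
        echo "************ END TEST $name ************"
        return "$rc"
    }

    # The invocation traced above would then look like:
    run_test_sketch nvme_flexible_data_placement \
        /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
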
00:11:12.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.075 04:35:01 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.075 --rc genhtml_branch_coverage=1 00:11:12.075 --rc genhtml_function_coverage=1 00:11:12.075 --rc genhtml_legend=1 00:11:12.075 --rc geninfo_all_blocks=1 00:11:12.075 --rc geninfo_unexecuted_blocks=1 00:11:12.075 00:11:12.075 ' 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.075 --rc genhtml_branch_coverage=1 00:11:12.075 --rc genhtml_function_coverage=1 00:11:12.075 --rc genhtml_legend=1 00:11:12.075 --rc geninfo_all_blocks=1 00:11:12.075 --rc geninfo_unexecuted_blocks=1 00:11:12.075 00:11:12.075 ' 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.075 --rc genhtml_branch_coverage=1 00:11:12.075 --rc genhtml_function_coverage=1 00:11:12.075 --rc genhtml_legend=1 00:11:12.075 --rc geninfo_all_blocks=1 00:11:12.075 --rc geninfo_unexecuted_blocks=1 00:11:12.075 00:11:12.075 ' 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:12.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.075 --rc genhtml_branch_coverage=1 00:11:12.075 --rc genhtml_function_coverage=1 00:11:12.075 --rc genhtml_legend=1 00:11:12.075 --rc geninfo_all_blocks=1 00:11:12.075 --rc geninfo_unexecuted_blocks=1 00:11:12.075 00:11:12.075 ' 00:11:12.075 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.075 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:12.075 04:35:01 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:12.076 04:35:01 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:12.076 04:35:01 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:12.334 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:12.334 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67391 00:11:12.334 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:12.334 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:12.334 04:35:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67391 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67391 ']' 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.334 04:35:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.334 [2024-10-15 04:35:01.764117] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
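
Just above, nvme_rpc resolves its target controller by taking the first PCI address that gen_nvme.sh emits, as the autotest_common.sh@1496-@1510 trace shows. A compact reconstruction of that helper (the function name is illustrative; $rootdir is the repository root the trace itself references):

    get_first_nvme_bdf_sketch() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # the (( 4 == 0 )) guard in the trace
        echo "${bdfs[0]}"                   # here: 0000:00:10.0 of the four controllers
    }
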
00:11:12.334 [2024-10-15 04:35:01.764259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67391 ] 00:11:12.592 [2024-10-15 04:35:01.938056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:12.592 [2024-10-15 04:35:02.051807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.592 [2024-10-15 04:35:02.051891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.531 04:35:02 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.531 04:35:02 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:13.531 04:35:02 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:13.791 Nvme0n1 00:11:13.791 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:13.791 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:14.050 request: 00:11:14.050 { 00:11:14.050 "bdev_name": "Nvme0n1", 00:11:14.050 "filename": "non_existing_file", 00:11:14.050 "method": "bdev_nvme_apply_firmware", 00:11:14.050 "req_id": 1 00:11:14.050 } 00:11:14.050 Got JSON-RPC error response 00:11:14.050 response: 00:11:14.050 { 00:11:14.050 "code": -32603, 00:11:14.050 "message": "open file failed." 00:11:14.050 } 00:11:14.050 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:14.050 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:14.050 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:14.310 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:14.310 04:35:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67391 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67391 ']' 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67391 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67391 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.310 killing process with pid 67391 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67391' 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67391 00:11:14.310 04:35:03 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67391 00:11:16.844 00:11:16.844 real 0m4.681s 00:11:16.844 user 0m8.811s 00:11:16.844 sys 0m0.741s 00:11:16.844 04:35:06 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.844 04:35:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.844 ************************************ 00:11:16.844 END TEST nvme_rpc 00:11:16.844 ************************************ 00:11:16.844 04:35:06 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:16.844 04:35:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:11:16.844 04:35:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.844 04:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:16.844 ************************************ 00:11:16.844 START TEST nvme_rpc_timeouts 00:11:16.844 ************************************ 00:11:16.844 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:16.844 * Looking for test storage... 00:11:16.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:16.844 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:16.844 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:11:16.844 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:16.844 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.844 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.845 04:35:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:16.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.845 --rc genhtml_branch_coverage=1 00:11:16.845 --rc genhtml_function_coverage=1 00:11:16.845 --rc genhtml_legend=1 00:11:16.845 --rc geninfo_all_blocks=1 00:11:16.845 --rc geninfo_unexecuted_blocks=1 00:11:16.845 00:11:16.845 ' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:16.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.845 --rc genhtml_branch_coverage=1 00:11:16.845 --rc genhtml_function_coverage=1 00:11:16.845 --rc genhtml_legend=1 00:11:16.845 --rc geninfo_all_blocks=1 00:11:16.845 --rc geninfo_unexecuted_blocks=1 00:11:16.845 00:11:16.845 ' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:16.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.845 --rc genhtml_branch_coverage=1 00:11:16.845 --rc genhtml_function_coverage=1 00:11:16.845 --rc genhtml_legend=1 00:11:16.845 --rc geninfo_all_blocks=1 00:11:16.845 --rc geninfo_unexecuted_blocks=1 00:11:16.845 00:11:16.845 ' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:16.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.845 --rc genhtml_branch_coverage=1 00:11:16.845 --rc genhtml_function_coverage=1 00:11:16.845 --rc genhtml_legend=1 00:11:16.845 --rc geninfo_all_blocks=1 00:11:16.845 --rc geninfo_unexecuted_blocks=1 00:11:16.845 00:11:16.845 ' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67467 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67467 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67499 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:16.845 04:35:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67499 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67499 ']' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.845 04:35:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:17.103 [2024-10-15 04:35:06.445083] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:11:17.103 [2024-10-15 04:35:06.445214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67499 ] 00:11:17.363 [2024-10-15 04:35:06.618476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.363 [2024-10-15 04:35:06.738449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.363 [2024-10-15 04:35:06.738486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.299 04:35:07 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.299 04:35:07 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:11:18.299 Checking default timeout settings: 00:11:18.299 04:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:18.299 04:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:18.559 Making settings changes with rpc: 00:11:18.559 04:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:18.559 04:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:18.818 Check default vs. modified settings: 00:11:18.818 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:18.818 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:19.386 Setting action_on_timeout is changed as expected. 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:19.386 Setting timeout_us is changed as expected. 
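Before the same pipeline repeats below for timeout_admin_us, note how each value above is recovered: the test greps the setting's key out of the save_config dump, takes the second whitespace-delimited field with awk, and strips punctuation with sed. A minimal sketch of that pipeline, reusing the tmpfile names from this run; get_setting is a helper name introduced here for illustration, and the field position assumes save_config's "key": value layout as captured in this log:

    # Pull one bdev_nvme option value out of a saved config dump.
    get_setting() {
        local name=$1 file=$2
        grep "$name" "$file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(get_setting "$setting" /tmp/settings_default_67467)
        after=$(get_setting "$setting" /tmp/settings_modified_67467)
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done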
00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:19.386 Setting timeout_admin_us is changed as expected. 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67467 /tmp/settings_modified_67467 00:11:19.386 04:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67499 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67499 ']' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67499 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67499 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.386 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67499' 00:11:19.386 killing process with pid 67499 00:11:19.387 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67499 00:11:19.387 04:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67499 00:11:21.993 RPC TIMEOUT SETTING TEST PASSED. 00:11:21.993 04:35:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
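Stripped of the xtrace noise, the whole passing test above is four RPC calls against the freshly started target; a condensed sketch with the values from this run (spdk_tgt startup and waitforlisten omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc save_config > /tmp/settings_default_67467       # snapshot the defaults
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
         --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_67467      # snapshot the new values
    # then diff the two dumps setting by setting, as traced above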
00:11:21.993 00:11:21.993 real 0m5.142s 00:11:21.993 user 0m9.820s 00:11:21.993 sys 0m0.787s 00:11:21.993 04:35:11 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.993 04:35:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:21.993 ************************************ 00:11:21.993 END TEST nvme_rpc_timeouts 00:11:21.993 ************************************ 00:11:21.993 04:35:11 -- spdk/autotest.sh@239 -- # uname -s 00:11:21.993 04:35:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:21.993 04:35:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:21.993 04:35:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:21.993 04:35:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.993 04:35:11 -- common/autotest_common.sh@10 -- # set +x 00:11:21.993 ************************************ 00:11:21.993 START TEST sw_hotplug 00:11:21.993 ************************************ 00:11:21.993 04:35:11 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:21.993 * Looking for test storage... 00:11:21.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:21.993 04:35:11 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:21.993 04:35:11 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:11:21.993 04:35:11 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:22.252 04:35:11 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.252 04:35:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:22.252 04:35:11 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.252 04:35:11 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.252 --rc genhtml_branch_coverage=1 00:11:22.252 --rc genhtml_function_coverage=1 00:11:22.252 --rc genhtml_legend=1 00:11:22.252 --rc geninfo_all_blocks=1 00:11:22.252 --rc geninfo_unexecuted_blocks=1 00:11:22.252 00:11:22.252 ' 00:11:22.252 04:35:11 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.252 --rc genhtml_branch_coverage=1 00:11:22.252 --rc genhtml_function_coverage=1 00:11:22.252 --rc genhtml_legend=1 00:11:22.252 --rc geninfo_all_blocks=1 00:11:22.252 --rc geninfo_unexecuted_blocks=1 00:11:22.252 00:11:22.252 ' 00:11:22.252 04:35:11 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.252 --rc genhtml_branch_coverage=1 00:11:22.252 --rc genhtml_function_coverage=1 00:11:22.252 --rc genhtml_legend=1 00:11:22.252 --rc geninfo_all_blocks=1 00:11:22.252 --rc geninfo_unexecuted_blocks=1 00:11:22.252 00:11:22.252 ' 00:11:22.252 04:35:11 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.252 --rc genhtml_branch_coverage=1 00:11:22.252 --rc genhtml_function_coverage=1 00:11:22.252 --rc genhtml_legend=1 00:11:22.252 --rc geninfo_all_blocks=1 00:11:22.252 --rc geninfo_unexecuted_blocks=1 00:11:22.252 00:11:22.252 ' 00:11:22.252 04:35:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:22.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:23.078 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:23.079 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:23.079 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:23.079 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
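The lt 1.15 2 gate traced above is scripts/common.sh's cmp_versions: split both version strings on dots and compare field by field. A simplified sketch of the same idea; version_lt is a name introduced here, and the real script's handling of non-numeric fields via its decimal helper is glossed over:

    # Succeed when $1 sorts before $2 field-by-field, e.g. 1.15 < 2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"

The nvme_in_userspace expansion that the trace enters next is condensed into a sketch after the hotplug app starts below.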
00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:23.079 04:35:12 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:23.079 04:35:12 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:23.079 04:35:12 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:23.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:23.906 Waiting for block devices as requested 00:11:23.906 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.165 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.165 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.425 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:29.758 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:29.758 04:35:18 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:29.758 04:35:18 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:30.017 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:30.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:30.276 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:30.535 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:30.794 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.794 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:31.053 04:35:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68395 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:31.053 04:35:20 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:31.053 04:35:20 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:31.053 04:35:20 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:31.053 04:35:20 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:31.053 04:35:20 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:31.053 04:35:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:31.311 Initializing NVMe Controllers 00:11:31.311 Attaching to 0000:00:10.0 00:11:31.311 Attaching to 0000:00:11.0 00:11:31.311 Attached to 0000:00:10.0 00:11:31.311 Attached to 0000:00:11.0 00:11:31.311 Initialization complete. Starting I/O... 
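nvme_in_userspace, expanded at length above, boils down to a single lspci pipeline: list every PCI function, keep those with class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), then truncate to the first nvme_count entries. A sketch of the core, with setup.sh's allowed/denied-controller filtering left out:

    # BDFs of NVMe controllers (class code 0108, prog-if 02).
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 |
             awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))

    nvme_count=2
    nvmes=("${nvmes[@]::nvme_count}")    # this test only drives the first two
    printf '%s\n' "${nvmes[@]}"          # 0000:00:10.0 0000:00:11.0 on this host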
00:11:31.311 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:31.311 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:31.311 00:11:32.249 QEMU NVMe Ctrl (12340 ): 1408 I/Os completed (+1408) 00:11:32.249 QEMU NVMe Ctrl (12341 ): 1410 I/Os completed (+1410) 00:11:32.249 00:11:33.276 QEMU NVMe Ctrl (12340 ): 3156 I/Os completed (+1748) 00:11:33.276 QEMU NVMe Ctrl (12341 ): 3158 I/Os completed (+1748) 00:11:33.276 00:11:34.650 QEMU NVMe Ctrl (12340 ): 4920 I/Os completed (+1764) 00:11:34.650 QEMU NVMe Ctrl (12341 ): 4922 I/Os completed (+1764) 00:11:34.650 00:11:35.585 QEMU NVMe Ctrl (12340 ): 6892 I/Os completed (+1972) 00:11:35.585 QEMU NVMe Ctrl (12341 ): 6864 I/Os completed (+1942) 00:11:35.585 00:11:36.519 QEMU NVMe Ctrl (12340 ): 9006 I/Os completed (+2114) 00:11:36.519 QEMU NVMe Ctrl (12341 ): 9007 I/Os completed (+2143) 00:11:36.519 00:11:37.086 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.086 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.086 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.086 [2024-10-15 04:35:26.526787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:37.086 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:37.086 [2024-10-15 04:35:26.528612] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.528730] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.528781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.528843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:37.086 [2024-10-15 04:35:26.531907] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.532051] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.532106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.532207] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.086 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.086 [2024-10-15 04:35:26.567040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
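(The matching abort dump for 12341 continues below.) The "echo 1" at sw_hotplug.sh:40 is the surprise-removal itself. xtrace does not show the redirection target, but taken together with the echo 1 > /sys/bus/pci/rescan in the target-mode cleanup trap later in this log, the standard sysfs hot-unplug interface is the likely mechanism; a hedged sketch, where the /sys paths are the usual kernel ABI rather than something read from this trace:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove: the driver
                                                  # fails the ctrlr, queued I/O
                                                  # is aborted as logged above
    sleep 6                                       # hotplug_wait in this test
    echo 1 > /sys/bus/pci/rescan                  # re-enumerate, device returns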
00:11:37.086 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:37.086 [2024-10-15 04:35:26.568649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.568707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.568732] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.568753] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:37.086 [2024-10-15 04:35:26.571571] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.571628] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.571649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.086 [2024-10-15 04:35:26.571666] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.344 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:37.344 EAL: Scan for (pci) bus failed. 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.344 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.344 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:37.344 Attaching to 0000:00:10.0 00:11:37.344 Attached to 0000:00:10.0 00:11:37.603 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.603 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.603 04:35:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.603 Attaching to 0000:00:11.0 00:11:37.603 Attached to 0000:00:11.0 00:11:38.539 QEMU NVMe Ctrl (12340 ): 2056 I/Os completed (+2056) 00:11:38.539 QEMU NVMe Ctrl (12341 ): 1800 I/Os completed (+1800) 00:11:38.539 00:11:39.475 QEMU NVMe Ctrl (12340 ): 4264 I/Os completed (+2208) 00:11:39.475 QEMU NVMe Ctrl (12341 ): 4013 I/Os completed (+2213) 00:11:39.475 00:11:40.411 QEMU NVMe Ctrl (12340 ): 6484 I/Os completed (+2220) 00:11:40.411 QEMU NVMe Ctrl (12341 ): 6233 I/Os completed (+2220) 00:11:40.411 00:11:41.390 QEMU NVMe Ctrl (12340 ): 8640 I/Os completed (+2156) 00:11:41.390 QEMU NVMe Ctrl (12341 ): 8390 I/Os completed (+2157) 00:11:41.390 00:11:42.326 QEMU NVMe Ctrl (12340 ): 10748 I/Os completed (+2108) 00:11:42.326 QEMU NVMe Ctrl (12341 ): 10499 I/Os completed (+2109) 00:11:42.326 00:11:43.262 QEMU NVMe Ctrl (12340 ): 12860 I/Os completed (+2112) 00:11:43.262 QEMU NVMe Ctrl (12341 ): 12612 I/Os completed (+2113) 00:11:43.262 00:11:44.639 QEMU NVMe Ctrl (12340 ): 14784 I/Os completed (+1924) 00:11:44.639 QEMU NVMe Ctrl (12341 ): 14546 I/Os completed (+1934) 
00:11:44.639 00:11:45.576 QEMU NVMe Ctrl (12340 ): 16670 I/Os completed (+1886) 00:11:45.576 QEMU NVMe Ctrl (12341 ): 16443 I/Os completed (+1897) 00:11:45.576 00:11:46.512 QEMU NVMe Ctrl (12340 ): 18530 I/Os completed (+1860) 00:11:46.512 QEMU NVMe Ctrl (12341 ): 18307 I/Os completed (+1864) 00:11:46.512 00:11:47.450 QEMU NVMe Ctrl (12340 ): 20534 I/Os completed (+2004) 00:11:47.450 QEMU NVMe Ctrl (12341 ): 20313 I/Os completed (+2006) 00:11:47.450 00:11:48.387 QEMU NVMe Ctrl (12340 ): 22730 I/Os completed (+2196) 00:11:48.387 QEMU NVMe Ctrl (12341 ): 22509 I/Os completed (+2196) 00:11:48.387 00:11:49.341 QEMU NVMe Ctrl (12340 ): 24822 I/Os completed (+2092) 00:11:49.341 QEMU NVMe Ctrl (12341 ): 24609 I/Os completed (+2100) 00:11:49.341 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.607 [2024-10-15 04:35:38.941669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:49.607 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:49.607 [2024-10-15 04:35:38.943704] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.943906] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.943978] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.944191] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:49.607 [2024-10-15 04:35:38.947618] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.947767] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.947834] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.947967] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.607 [2024-10-15 04:35:38.983309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
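(The unregister dump for 12341 follows below.) The counter lines between events all share one layout, so per-interval throughput can be pulled straight out of a captured log; a throwaway sketch, where build.log is a hypothetical capture of this output:

    # Total completions for one controller across the run.
    awk '/QEMU NVMe Ctrl \(12340 \)/ { gsub(/[^0-9]/, "", $NF); total += $NF }
         END { print "12340 total completions:", total }' build.log

Each matched line's final field is the (+N) delta, so stripping the punctuation and summing gives total completions across all plug cycles, even though the absolute counter resets after every reattach.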
00:11:49.607 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:49.607 [2024-10-15 04:35:38.985156] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.985367] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.985403] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.985424] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:49.607 [2024-10-15 04:35:38.988402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.988545] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.988601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 [2024-10-15 04:35:38.988737] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:49.607 04:35:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.607 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:49.607 EAL: Scan for (pci) bus failed. 00:11:49.607 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.607 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.607 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:49.874 Attaching to 0000:00:10.0 00:11:49.874 Attached to 0000:00:10.0 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.874 04:35:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.874 Attaching to 0000:00:11.0 00:11:49.874 Attached to 0000:00:11.0 00:11:50.493 QEMU NVMe Ctrl (12340 ): 1112 I/Os completed (+1112) 00:11:50.493 QEMU NVMe Ctrl (12341 ): 872 I/Os completed (+872) 00:11:50.493 00:11:51.427 QEMU NVMe Ctrl (12340 ): 3251 I/Os completed (+2139) 00:11:51.427 QEMU NVMe Ctrl (12341 ): 3047 I/Os completed (+2175) 00:11:51.427 00:11:52.362 QEMU NVMe Ctrl (12340 ): 5434 I/Os completed (+2183) 00:11:52.362 QEMU NVMe Ctrl (12341 ): 5227 I/Os completed (+2180) 00:11:52.362 00:11:53.297 QEMU NVMe Ctrl (12340 ): 7618 I/Os completed (+2184) 00:11:53.297 QEMU NVMe Ctrl (12341 ): 7411 I/Os completed (+2184) 00:11:53.297 00:11:54.232 QEMU NVMe Ctrl (12340 ): 9546 I/Os completed (+1928) 00:11:54.232 QEMU NVMe Ctrl (12341 ): 9339 I/Os completed (+1928) 00:11:54.232 00:11:55.608 QEMU NVMe Ctrl (12340 ): 11470 I/Os completed (+1924) 00:11:55.608 QEMU NVMe Ctrl (12341 ): 11263 I/Os completed (+1924) 00:11:55.608 00:11:56.544 QEMU NVMe Ctrl (12340 ): 13230 I/Os completed (+1760) 00:11:56.544 QEMU NVMe Ctrl (12341 ): 13023 I/Os completed (+1760) 00:11:56.544 
00:11:57.479 QEMU NVMe Ctrl (12340 ): 15105 I/Os completed (+1875) 00:11:57.479 QEMU NVMe Ctrl (12341 ): 14898 I/Os completed (+1875) 00:11:57.479 00:11:58.420 QEMU NVMe Ctrl (12340 ): 17017 I/Os completed (+1912) 00:11:58.420 QEMU NVMe Ctrl (12341 ): 16810 I/Os completed (+1912) 00:11:58.420 00:11:59.353 QEMU NVMe Ctrl (12340 ): 19121 I/Os completed (+2104) 00:11:59.353 QEMU NVMe Ctrl (12341 ): 18914 I/Os completed (+2104) 00:11:59.353 00:12:00.286 QEMU NVMe Ctrl (12340 ): 21097 I/Os completed (+1976) 00:12:00.286 QEMU NVMe Ctrl (12341 ): 20892 I/Os completed (+1978) 00:12:00.286 00:12:01.222 QEMU NVMe Ctrl (12340 ): 22977 I/Os completed (+1880) 00:12:01.222 QEMU NVMe Ctrl (12341 ): 22783 I/Os completed (+1891) 00:12:01.222 00:12:01.822 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:01.822 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.822 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.822 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.080 [2024-10-15 04:35:51.331496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:02.080 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:02.080 [2024-10-15 04:35:51.333823] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.333899] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.333924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.333948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:02.080 [2024-10-15 04:35:51.337362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.337425] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.337449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.337475] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.080 [2024-10-15 04:35:51.369693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:02.080 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:02.080 [2024-10-15 04:35:51.371787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.371858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.371885] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.371904] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:02.080 [2024-10-15 04:35:51.374749] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.374804] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.374850] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 [2024-10-15 04:35:51.374871] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.080 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:02.080 EAL: Scan for (pci) bus failed. 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.080 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:02.339 Attaching to 0000:00:10.0 00:12:02.339 Attached to 0000:00:10.0 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.339 QEMU NVMe Ctrl (12340 ): 180 I/Os completed (+180) 00:12:02.339 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.339 04:35:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.339 Attaching to 0000:00:11.0 00:12:02.339 Attached to 0000:00:11.0 00:12:02.339 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:02.339 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:02.339 [2024-10-15 04:35:51.722256] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:14.546 04:36:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:14.546 04:36:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.546 04:36:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.20 00:12:14.546 04:36:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.20 00:12:14.546 04:36:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:14.546 04:36:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.20 00:12:14.546 04:36:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.20 2 00:12:14.546 remove_attach_helper took 43.20s 
to complete (handling 2 nvme drive(s)) 04:36:03 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68395 00:12:21.144 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68395) - No such process 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68395 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68944 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:21.144 04:36:09 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68944 00:12:21.144 04:36:09 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 68944 ']' 00:12:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.144 04:36:09 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.144 04:36:09 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.144 04:36:09 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.144 04:36:09 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.144 04:36:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.144 [2024-10-15 04:36:09.863552] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
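The 43.20 figure above is produced by autotest_common.sh's timing_cmd, which leans on bash's time keyword with TIMEFORMAT=%2R (wall-clock seconds, two decimals) and captures the result from stderr. Reduced to its essentials (the real function also propagates the command's exit status and checks for a tty):

    # Run a command, emit only its elapsed wall-clock seconds.
    timing_cmd() {
        local TIMEFORMAT=%2R elapsed
        elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }

    helper_time=$(timing_cmd sleep 2)                        # -> 2.00
    printf 'remove_attach_helper took %ss to complete\n' "$helper_time"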
00:12:21.144 [2024-10-15 04:36:09.864028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68944 ] 00:12:21.144 [2024-10-15 04:36:10.053147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.144 [2024-10-15 04:36:10.182038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:21.712 04:36:11 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:21.712 04:36:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.316 04:36:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.316 04:36:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.316 [2024-10-15 04:36:17.227300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
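This second pass repeats the same three hotplug events, but in target mode: instead of the standalone hotplug app, spdk_tgt itself watches for devices. The two RPCs that carry the whole scheme appear in the trace above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_set_hotplug -e     # -e: enable the bdev-layer hotplug monitor
    $rpc bdev_get_bdevs               # NVMe bdevs now appear and disappear as
                                      # controllers are plugged and unplugged

The failed-state dump that follows is the first surprise-removal being observed through that monitor.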
00:12:28.316 [2024-10-15 04:36:17.230102] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.230154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.230173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 [2024-10-15 04:36:17.230202] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.230215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.230232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 [2024-10-15 04:36:17.230246] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.230262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.230288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 [2024-10-15 04:36:17.230307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.230319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.230334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 04:36:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:28.316 [2024-10-15 04:36:17.726507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:28.316 [2024-10-15 04:36:17.729054] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.729235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.729268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 [2024-10-15 04:36:17.729294] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.729309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.729322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 [2024-10-15 04:36:17.729338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.729349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.729364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 [2024-10-15 04:36:17.729377] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.316 [2024-10-15 04:36:17.729390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.316 [2024-10-15 04:36:17.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.316 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.317 04:36:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.317 04:36:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.317 04:36:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:28.317 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:28.575 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.575 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.575 04:36:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:28.575 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:28.575 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.575 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.575 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.575 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
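(The uio_pci_generic rebind loop resumes below.) bdev_bdfs, invoked twice above, is the target-mode detection primitive: list every bdev over RPC, extract each NVMe bdev's PCI address with jq, and de-duplicate. The test then polls it until a removed address drops out; a condensed sketch of that wait loop, with the address from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    bdev_bdfs() {
        $rpc bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }

    while bdev_bdfs | grep -q 0000:00:11.0; do
        printf 'Still waiting for %s to be gone\n' 0000:00:11.0
        sleep 0.5
    done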
00:12:28.833 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:28.833 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.833 04:36:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.055 04:36:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.055 04:36:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.055 04:36:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.055 [2024-10-15 04:36:30.306327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:41.055 [2024-10-15 04:36:30.309139] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.055 [2024-10-15 04:36:30.309205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.055 [2024-10-15 04:36:30.309226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.055 [2024-10-15 04:36:30.309254] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.055 [2024-10-15 04:36:30.309267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.055 [2024-10-15 04:36:30.309283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.055 [2024-10-15 04:36:30.309296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.055 [2024-10-15 04:36:30.309323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.055 [2024-10-15 04:36:30.309335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.055 [2024-10-15 04:36:30.309350] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.055 [2024-10-15 04:36:30.309378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.055 [2024-10-15 04:36:30.309393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.055 04:36:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.055 04:36:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.055 04:36:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:41.055 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:41.314 [2024-10-15 04:36:30.705649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:41.314 [2024-10-15 04:36:30.708314] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.314 [2024-10-15 04:36:30.708362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.314 [2024-10-15 04:36:30.708386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.314 [2024-10-15 04:36:30.708409] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.314 [2024-10-15 04:36:30.708440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.314 [2024-10-15 04:36:30.708453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.314 [2024-10-15 04:36:30.708469] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.314 [2024-10-15 04:36:30.708481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.314 [2024-10-15 04:36:30.708497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.314 [2024-10-15 04:36:30.708511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.314 [2024-10-15 04:36:30.708525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.314 [2024-10-15 04:36:30.708537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.314 [2024-10-15 04:36:30.708558] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:41.314 [2024-10-15 04:36:30.708571] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:41.314 [2024-10-15 04:36:30.708586] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:41.314 [2024-10-15 04:36:30.708597] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:41.573 04:36:30 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.573 04:36:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.573 04:36:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.573 04:36:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.573 04:36:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.573 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.573 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.573 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.832 04:36:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.059 04:36:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.059 04:36:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.059 04:36:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.059 [2024-10-15 04:36:43.385325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
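The bare echoes traced at sw_hotplug.sh@40 and @56-62 are the surprise-removal and re-attach halves of each hotplug event. The log records only the values written (1, uio_pci_generic, the BDF, an empty string), not the redirect targets, so the sysfs paths in this sketch are assumptions based on the standard Linux PCI hotplug interface:

# Hedged reconstruction of one remove/re-attach cycle; the sysfs paths
# below are assumed, since the xtrace shows only the echoed values.
bdf=0000:00:11.0

# Surprise-remove the device from the PCI bus (sw_hotplug.sh@40).
echo 1 > "/sys/bus/pci/devices/${bdf}/remove"

# ...poll bdev_bdfs until the controller is reported gone...

# Bring it back: rescan the bus (sw_hotplug.sh@56), pin the driver,
# reprobe, then clear the override -- plausibly what the trailing
# empty echo at sw_hotplug.sh@62 does.
echo 1 > /sys/bus/pci/rescan
echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
echo "${bdf}" > /sys/bus/pci/drivers_probe
echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"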
00:12:54.059 [2024-10-15 04:36:43.388519] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.059 [2024-10-15 04:36:43.388575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.059 [2024-10-15 04:36:43.388593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.059 [2024-10-15 04:36:43.388620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.059 [2024-10-15 04:36:43.388633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.059 [2024-10-15 04:36:43.388648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.059 [2024-10-15 04:36:43.388662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.059 [2024-10-15 04:36:43.388686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.059 [2024-10-15 04:36:43.388699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.059 [2024-10-15 04:36:43.388715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.059 [2024-10-15 04:36:43.388727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.059 [2024-10-15 04:36:43.388742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.059 04:36:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.059 04:36:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.059 04:36:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:54.059 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:54.332 [2024-10-15 04:36:43.784696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:54.332 [2024-10-15 04:36:43.787681] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.332 [2024-10-15 04:36:43.787725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.332 [2024-10-15 04:36:43.787745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.332 [2024-10-15 04:36:43.787768] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.332 [2024-10-15 04:36:43.787786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.332 [2024-10-15 04:36:43.787799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.332 [2024-10-15 04:36:43.787823] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.332 [2024-10-15 04:36:43.787835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.332 [2024-10-15 04:36:43.787850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.332 [2024-10-15 04:36:43.787862] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.332 [2024-10-15 04:36:43.787876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.332 [2024-10-15 04:36:43.787887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.332 [2024-10-15 04:36:43.787907] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:54.332 [2024-10-15 04:36:43.787920] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:54.332 [2024-10-15 04:36:43.787933] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:54.332 [2024-10-15 04:36:43.787944] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:12:54.590 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:54.590 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.590 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.590 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.590 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.590 04:36:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.590 04:36:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.590 04:36:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.590 04:36:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.590 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:54.590 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:10.0 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.848 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:55.106 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:55.106 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:55.106 04:36:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.30 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.30 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.30 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.30 2 00:13:07.315 remove_attach_helper took 45.30s to complete (handling 2 nvme drive(s)) 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:07.315 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:07.315 04:36:56 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:07.316 04:36:56 sw_hotplug -- 
common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:07.316 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:07.316 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:07.316 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:07.316 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:07.316 04:36:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.994 04:37:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.994 04:37:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.994 [2024-10-15 04:37:02.564572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:13.994 [2024-10-15 04:37:02.567322] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.567493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.567655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 [2024-10-15 04:37:02.567802] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.567862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.567990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 [2024-10-15 04:37:02.568051] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.568134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.568234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 [2024-10-15 04:37:02.568375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.568414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.568519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 04:37:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:13.994 04:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:13.994 [2024-10-15 04:37:02.963946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:13.994 [2024-10-15 04:37:02.965888] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.965944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.965965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 [2024-10-15 04:37:02.965989] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.966003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.966016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 [2024-10-15 04:37:02.966036] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.966048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.966063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 [2024-10-15 04:37:02.966077] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:13.994 [2024-10-15 04:37:02.966108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.994 [2024-10-15 04:37:02.966121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:13.994 04:37:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.994 04:37:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.994 04:37:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:13.994 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:13.995 
04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.995 04:37:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.257 04:37:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.257 04:37:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.257 04:37:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.257 [2024-10-15 04:37:15.543798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
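The 45.30s figure printed after the previous hotplug round comes from the timing wrapper visible at autotest_common.sh@707-720: the helper runs under bash's time keyword with TIMEFORMAT=%2R, so only elapsed wall-clock seconds are emitted and captured. A simplified sketch of that pattern (the real helper also handles TTY detection and exit codes):

# Simplified sketch of the timing_cmd pattern from the trace.
# TIMEFORMAT=%2R makes the `time` keyword print just the elapsed
# seconds with two decimals; the redirects capture that single value.
timing_cmd() {
    local time=0 TIMEFORMAT=%2R
    # The command's own output goes to stderr so the substitution
    # captures only time's report.
    time=$({ time "$@" >&2; } 2>&1)
    echo "$time"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2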
00:13:26.257 [2024-10-15 04:37:15.549489] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.257 [2024-10-15 04:37:15.549674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.257 [2024-10-15 04:37:15.549804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.257 [2024-10-15 04:37:15.550013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.257 [2024-10-15 04:37:15.550056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.257 [2024-10-15 04:37:15.550198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.257 [2024-10-15 04:37:15.550260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.257 [2024-10-15 04:37:15.550344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.257 [2024-10-15 04:37:15.550402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.257 [2024-10-15 04:37:15.550515] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.257 [2024-10-15 04:37:15.550556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.257 [2024-10-15 04:37:15.550660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.257 04:37:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.257 04:37:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.257 04:37:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:26.257 04:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:26.825 [2024-10-15 04:37:16.043019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:26.825 [2024-10-15 04:37:16.045201] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.825 [2024-10-15 04:37:16.045380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.825 [2024-10-15 04:37:16.045567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.825 [2024-10-15 04:37:16.045700] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.825 [2024-10-15 04:37:16.045745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.825 [2024-10-15 04:37:16.045895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.825 [2024-10-15 04:37:16.045965] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.825 [2024-10-15 04:37:16.046112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.825 [2024-10-15 04:37:16.046294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.825 [2024-10-15 04:37:16.046356] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.825 [2024-10-15 04:37:16.046447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:26.825 [2024-10-15 04:37:16.046505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:26.825 04:37:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.825 04:37:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.825 04:37:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:26.825 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.084 04:37:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:39.317 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:39.317 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:39.317 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:39.317 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.317 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.317 04:37:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.317 04:37:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.317 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:39.318 04:37:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:39.318 [2024-10-15 04:37:28.622833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:39.318 [2024-10-15 04:37:28.628043] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.318 [2024-10-15 04:37:28.628223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.318 [2024-10-15 04:37:28.628349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.318 [2024-10-15 04:37:28.628484] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.318 [2024-10-15 04:37:28.628612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.318 [2024-10-15 04:37:28.628747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.318 [2024-10-15 04:37:28.628868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.318 [2024-10-15 04:37:28.628912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.318 [2024-10-15 04:37:28.629009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.318 [2024-10-15 04:37:28.629072] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.318 [2024-10-15 04:37:28.629130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.318 [2024-10-15 04:37:28.629325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:39.318 04:37:28 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.318 04:37:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.318 04:37:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.318 04:37:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:39.318 04:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:39.886 [2024-10-15 04:37:29.122027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:39.886 [2024-10-15 04:37:29.124214] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.886 [2024-10-15 04:37:29.124265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.886 [2024-10-15 04:37:29.124298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.886 [2024-10-15 04:37:29.124321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.886 [2024-10-15 04:37:29.124337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.886 [2024-10-15 04:37:29.124350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.886 [2024-10-15 04:37:29.124367] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.886 [2024-10-15 04:37:29.124379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.886 [2024-10-15 04:37:29.124399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.886 [2024-10-15 04:37:29.124413] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.886 [2024-10-15 04:37:29.124428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:39.886 [2024-10-15 04:37:29.124441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
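The [[ ... == \0\0\0\0\:... ]] test at sw_hotplug.sh@71, traced a few entries above with xtrace escaping every character of the right-hand side, is just a plain string comparison: after the 12-second settle, the sorted BDF list must equal the expected pair before the next hotplug event starts. Stripped of the escaping, it amounts to:

# The post-rescan check, de-escaped: both controllers must be
# visible again before the loop continues.
expected='0000:00:10.0 0000:00:11.0'
bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "$expected" ]] || exit 1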
00:13:39.886 04:37:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.886 04:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.886 04:37:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:39.886 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:40.146 04:37:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:13:52.356 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:52.356 04:37:41 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68944 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 68944 ']' 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 68944 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68944 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:52.356 04:37:41 sw_hotplug -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68944' 00:13:52.356 killing process with pid 68944 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@969 -- # kill 68944 00:13:52.356 04:37:41 sw_hotplug -- common/autotest_common.sh@974 -- # wait 68944 00:13:54.888 04:37:44 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:55.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:56.023 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:56.023 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:56.023 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.281 00:13:56.281 real 2m34.328s 00:13:56.281 user 1m51.744s 00:13:56.281 sys 0m22.996s 00:13:56.281 04:37:45 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.281 04:37:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:56.281 ************************************ 00:13:56.281 END TEST sw_hotplug 00:13:56.281 ************************************ 00:13:56.281 04:37:45 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:56.281 04:37:45 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:56.281 04:37:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.281 04:37:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.281 04:37:45 -- common/autotest_common.sh@10 -- # set +x 00:13:56.281 ************************************ 00:13:56.281 START TEST nvme_xnvme 00:13:56.281 ************************************ 00:13:56.281 04:37:45 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:56.574 * Looking for test storage... 
00:13:56.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:56.574 04:37:45 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:56.574 04:37:45 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:56.574 04:37:45 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:56.574 04:37:45 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.574 04:37:45 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.575 --rc genhtml_branch_coverage=1 00:13:56.575 --rc genhtml_function_coverage=1 00:13:56.575 --rc genhtml_legend=1 00:13:56.575 --rc geninfo_all_blocks=1 00:13:56.575 --rc geninfo_unexecuted_blocks=1 00:13:56.575 00:13:56.575 ' 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.575 --rc genhtml_branch_coverage=1 00:13:56.575 --rc genhtml_function_coverage=1 00:13:56.575 --rc genhtml_legend=1 00:13:56.575 --rc geninfo_all_blocks=1 00:13:56.575 --rc geninfo_unexecuted_blocks=1 00:13:56.575 00:13:56.575 ' 00:13:56.575 04:37:45 
nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.575 --rc genhtml_branch_coverage=1 00:13:56.575 --rc genhtml_function_coverage=1 00:13:56.575 --rc genhtml_legend=1 00:13:56.575 --rc geninfo_all_blocks=1 00:13:56.575 --rc geninfo_unexecuted_blocks=1 00:13:56.575 00:13:56.575 ' 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:56.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.575 --rc genhtml_branch_coverage=1 00:13:56.575 --rc genhtml_function_coverage=1 00:13:56.575 --rc genhtml_legend=1 00:13:56.575 --rc geninfo_all_blocks=1 00:13:56.575 --rc geninfo_unexecuted_blocks=1 00:13:56.575 00:13:56.575 ' 00:13:56.575 04:37:45 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.575 04:37:45 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.575 04:37:45 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.575 04:37:45 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.575 04:37:45 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.575 04:37:45 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:56.575 04:37:45 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.575 04:37:45 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.575 04:37:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.575 
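The lcov probe above (lt 1.15 2) runs the version comparator from scripts/common.sh: both versions are split on ., -, and :, then compared field by field, with missing fields treated as zero. A trimmed sketch of that logic, leaving out the decimal() sanitizing the real helper applies to non-numeric fields:

# Trimmed sketch of lt()/cmp_versions() from scripts/common.sh.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # Missing fields compare as 0, so 1.15 behaves like 1.15.0.
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' ]]; return
        elif (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' ]]; return
        fi
    done
    [[ $op == '==' ]]    # all fields equal
}

lt 1.15 2 && echo "lcov predates 2.x -- use the branch/function coverage flags"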
************************************ 00:13:56.575 START TEST xnvme_to_malloc_dd_copy 00:13:56.575 ************************************ 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:56.575 04:37:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:56.575 04:37:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:56.575 { 00:13:56.575 "subsystems": [ 00:13:56.575 { 00:13:56.575 "subsystem": "bdev", 00:13:56.575 "config": [ 00:13:56.575 { 00:13:56.575 "params": { 00:13:56.575 "block_size": 512, 00:13:56.575 "num_blocks": 2097152, 00:13:56.575 "name": "malloc0" 00:13:56.575 }, 00:13:56.575 "method": "bdev_malloc_create" 00:13:56.575 }, 00:13:56.575 { 00:13:56.575 "params": { 00:13:56.575 "io_mechanism": "libaio", 00:13:56.575 "filename": "/dev/nullb0", 00:13:56.575 "name": "null0" 00:13:56.575 }, 00:13:56.575 "method": "bdev_xnvme_create" 00:13:56.575 }, 
00:13:56.575 { 00:13:56.575 "method": "bdev_wait_for_examine" 00:13:56.575 } 00:13:56.575 ] 00:13:56.575 } 00:13:56.575 ] 00:13:56.575 } 00:13:56.849 [2024-10-15 04:37:46.102972] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:13:56.849 [2024-10-15 04:37:46.103326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70316 ] 00:13:56.849 [2024-10-15 04:37:46.278020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.107 [2024-10-15 04:37:46.403791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.641  [2024-10-15T04:37:50.080Z] Copying: 232/1024 [MB] (232 MBps) [2024-10-15T04:37:51.016Z] Copying: 467/1024 [MB] (235 MBps) [2024-10-15T04:37:51.951Z] Copying: 704/1024 [MB] (236 MBps) [2024-10-15T04:37:52.518Z] Copying: 936/1024 [MB] (232 MBps) [2024-10-15T04:37:56.707Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:14:07.203 00:14:07.203 04:37:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:07.203 04:37:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:07.203 04:37:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:07.203 04:37:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:07.203 { 00:14:07.203 "subsystems": [ 00:14:07.203 { 00:14:07.203 "subsystem": "bdev", 00:14:07.203 "config": [ 00:14:07.203 { 00:14:07.203 "params": { 00:14:07.203 "block_size": 512, 00:14:07.203 "num_blocks": 2097152, 00:14:07.203 "name": "malloc0" 00:14:07.203 }, 00:14:07.203 "method": "bdev_malloc_create" 00:14:07.203 }, 00:14:07.203 { 00:14:07.203 "params": { 00:14:07.203 "io_mechanism": "libaio", 00:14:07.203 "filename": "/dev/nullb0", 00:14:07.203 "name": "null0" 00:14:07.203 }, 00:14:07.203 "method": "bdev_xnvme_create" 00:14:07.203 }, 00:14:07.203 { 00:14:07.203 "method": "bdev_wait_for_examine" 00:14:07.203 } 00:14:07.203 ] 00:14:07.203 } 00:14:07.203 ] 00:14:07.203 } 00:14:07.462 [2024-10-15 04:37:56.759765] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
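The pass just completed copies a 1 GiB malloc bdev to /dev/nullb0 through an xnvme bdev using libaio; the reverse direction now starts. A hedged standalone reproduction of the forward copy, reusing the exact JSON printed above (the temp-file path is illustrative; the harness itself feeds the config via /dev/fd/62):

  # sketch: reproduce the malloc0 -> null0 copy outside the harness
  sudo modprobe null_blk gb=1                    # provides /dev/nullb0
  cat > /tmp/xnvme_copy.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [
          { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
            "method": "bdev_malloc_create" },
          { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
            "method": "bdev_xnvme_create" },
          { "method": "bdev_wait_for_examine" }
      ] }
    ]
  }
  EOF
  sudo ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json
  sudo modprobe -r null_blk    # as the harness does at the end of the test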
00:14:07.462 [2024-10-15 04:37:56.760254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70431 ] 00:14:07.462 [2024-10-15 04:37:56.946771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.719 [2024-10-15 04:37:57.074077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.251  [2024-10-15T04:38:00.692Z] Copying: 234/1024 [MB] (234 MBps) [2024-10-15T04:38:01.642Z] Copying: 470/1024 [MB] (236 MBps) [2024-10-15T04:38:03.017Z] Copying: 706/1024 [MB] (235 MBps) [2024-10-15T04:38:03.018Z] Copying: 956/1024 [MB] (250 MBps) [2024-10-15T04:38:07.211Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:14:17.707 00:14:17.707 04:38:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:17.707 04:38:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:17.707 04:38:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:17.707 04:38:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:17.707 04:38:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:17.707 04:38:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:17.707 { 00:14:17.707 "subsystems": [ 00:14:17.707 { 00:14:17.707 "subsystem": "bdev", 00:14:17.707 "config": [ 00:14:17.707 { 00:14:17.707 "params": { 00:14:17.707 "block_size": 512, 00:14:17.707 "num_blocks": 2097152, 00:14:17.707 "name": "malloc0" 00:14:17.707 }, 00:14:17.707 "method": "bdev_malloc_create" 00:14:17.707 }, 00:14:17.707 { 00:14:17.707 "params": { 00:14:17.707 "io_mechanism": "io_uring", 00:14:17.707 "filename": "/dev/nullb0", 00:14:17.707 "name": "null0" 00:14:17.707 }, 00:14:17.707 "method": "bdev_xnvme_create" 00:14:17.707 }, 00:14:17.707 { 00:14:17.707 "method": "bdev_wait_for_examine" 00:14:17.707 } 00:14:17.707 ] 00:14:17.707 } 00:14:17.707 ] 00:14:17.707 } 00:14:17.966 [2024-10-15 04:38:07.241090] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
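xnvme.sh runs both copy directions once per I/O mechanism; between iterations only io_mechanism in the bdev_xnvme_create params changes (libaio above, io_uring from here on). A sketch of the loop shape, assuming the harness's gen_conf renders these arrays into the JSON shown in this log:

  # sketch of the per-mechanism loop (simplified from xnvme.sh)
  declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
  for io in libaio io_uring; do
      method_bdev_xnvme_create_0[io_mechanism]=$io
      spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # forward pass
      spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # reverse pass
  done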
00:14:17.966 [2024-10-15 04:38:07.241227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70548 ] 00:14:17.966 [2024-10-15 04:38:07.418043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.225 [2024-10-15 04:38:07.553129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.761  [2024-10-15T04:38:11.202Z] Copying: 236/1024 [MB] (236 MBps) [2024-10-15T04:38:12.140Z] Copying: 477/1024 [MB] (241 MBps) [2024-10-15T04:38:13.517Z] Copying: 742/1024 [MB] (264 MBps) [2024-10-15T04:38:13.517Z] Copying: 993/1024 [MB] (250 MBps) [2024-10-15T04:38:17.707Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:14:28.203 00:14:28.203 04:38:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:28.203 04:38:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:28.203 04:38:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:28.203 04:38:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:28.203 { 00:14:28.203 "subsystems": [ 00:14:28.203 { 00:14:28.203 "subsystem": "bdev", 00:14:28.203 "config": [ 00:14:28.203 { 00:14:28.203 "params": { 00:14:28.203 "block_size": 512, 00:14:28.203 "num_blocks": 2097152, 00:14:28.203 "name": "malloc0" 00:14:28.203 }, 00:14:28.203 "method": "bdev_malloc_create" 00:14:28.203 }, 00:14:28.203 { 00:14:28.203 "params": { 00:14:28.203 "io_mechanism": "io_uring", 00:14:28.203 "filename": "/dev/nullb0", 00:14:28.203 "name": "null0" 00:14:28.203 }, 00:14:28.203 "method": "bdev_xnvme_create" 00:14:28.203 }, 00:14:28.203 { 00:14:28.203 "method": "bdev_wait_for_examine" 00:14:28.203 } 00:14:28.203 ] 00:14:28.203 } 00:14:28.203 ] 00:14:28.203 } 00:14:28.203 [2024-10-15 04:38:17.652232] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:14:28.203 [2024-10-15 04:38:17.652375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70674 ] 00:14:28.463 [2024-10-15 04:38:17.830042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.463 [2024-10-15 04:38:17.967114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.084  [2024-10-15T04:38:21.966Z] Copying: 238/1024 [MB] (238 MBps) [2024-10-15T04:38:22.905Z] Copying: 477/1024 [MB] (238 MBps) [2024-10-15T04:38:23.899Z] Copying: 721/1024 [MB] (243 MBps) [2024-10-15T04:38:23.899Z] Copying: 966/1024 [MB] (244 MBps) [2024-10-15T04:38:28.107Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:14:38.603 00:14:38.603 04:38:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:38.603 04:38:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:38.862 ************************************ 00:14:38.862 END TEST xnvme_to_malloc_dd_copy 00:14:38.862 ************************************ 00:14:38.862 00:14:38.862 real 0m42.147s 00:14:38.862 user 0m36.987s 00:14:38.862 sys 0m4.601s 00:14:38.862 04:38:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.862 04:38:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 04:38:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:38.862 04:38:28 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:38.862 04:38:28 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.862 04:38:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 ************************************ 00:14:38.862 START TEST xnvme_bdevperf 00:14:38.862 ************************************ 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.862 04:38:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 { 00:14:38.862 "subsystems": [ 00:14:38.862 { 00:14:38.862 "subsystem": "bdev", 00:14:38.862 "config": [ 00:14:38.862 { 00:14:38.862 "params": { 00:14:38.862 "io_mechanism": "libaio", 00:14:38.862 "filename": "/dev/nullb0", 00:14:38.862 "name": "null0" 00:14:38.862 }, 00:14:38.862 "method": "bdev_xnvme_create" 00:14:38.862 }, 00:14:38.862 { 00:14:38.862 "method": "bdev_wait_for_examine" 00:14:38.862 } 00:14:38.862 ] 00:14:38.862 } 00:14:38.862 ] 00:14:38.862 } 00:14:38.862 [2024-10-15 04:38:28.324122] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:14:38.862 [2024-10-15 04:38:28.324260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70818 ] 00:14:39.121 [2024-10-15 04:38:28.500806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.379 [2024-10-15 04:38:28.632119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.638 Running I/O for 5 seconds... 
00:14:41.948 136384.00 IOPS, 532.75 MiB/s [2024-10-15T04:38:32.387Z] 137984.00 IOPS, 539.00 MiB/s [2024-10-15T04:38:33.346Z] 140288.00 IOPS, 548.00 MiB/s [2024-10-15T04:38:34.281Z] 139776.00 IOPS, 546.00 MiB/s [2024-10-15T04:38:34.281Z] 140032.00 IOPS, 547.00 MiB/s 00:14:44.777 Latency(us) 00:14:44.777 [2024-10-15T04:38:34.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.777 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:44.777 null0 : 5.00 139983.91 546.81 0.00 0.00 454.58 416.18 2013.46 00:14:44.777 [2024-10-15T04:38:34.281Z] =================================================================================================================== 00:14:44.777 [2024-10-15T04:38:34.281Z] Total : 139983.91 546.81 0.00 0.00 454.58 416.18 2013.46 00:14:46.156 04:38:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:46.156 04:38:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:46.156 04:38:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:46.156 04:38:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:46.156 04:38:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.156 04:38:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.156 { 00:14:46.156 "subsystems": [ 00:14:46.156 { 00:14:46.156 "subsystem": "bdev", 00:14:46.156 "config": [ 00:14:46.156 { 00:14:46.156 "params": { 00:14:46.156 "io_mechanism": "io_uring", 00:14:46.156 "filename": "/dev/nullb0", 00:14:46.156 "name": "null0" 00:14:46.156 }, 00:14:46.156 "method": "bdev_xnvme_create" 00:14:46.156 }, 00:14:46.156 { 00:14:46.156 "method": "bdev_wait_for_examine" 00:14:46.156 } 00:14:46.156 ] 00:14:46.156 } 00:14:46.156 ] 00:14:46.156 } 00:14:46.156 [2024-10-15 04:38:35.365724] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:14:46.156 [2024-10-15 04:38:35.366068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70892 ] 00:14:46.156 [2024-10-15 04:38:35.540996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.416 [2024-10-15 04:38:35.669707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.752 Running I/O for 5 seconds... 
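Both bdevperf halves (libaio above, io_uring here) reuse the same null0 bdev and the invocation traced earlier: a 5-second 4 KiB random-read run at queue depth 64. A sketch with the config in a file rather than /dev/fd/62 (file path illustrative):

  # sketch: the bdevperf run behind the IOPS tables in this log
  # -q queue depth, -o I/O size (bytes), -w workload, -t seconds, -T target bdev
  sudo ./build/examples/bdevperf --json /tmp/xnvme_null0.json \
      -q 64 -o 4096 -w randread -t 5 -T null0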
00:14:48.644 179072.00 IOPS, 699.50 MiB/s [2024-10-15T04:38:39.086Z] 178976.00 IOPS, 699.12 MiB/s [2024-10-15T04:38:40.462Z] 178794.67 IOPS, 698.42 MiB/s [2024-10-15T04:38:41.401Z] 178864.00 IOPS, 698.69 MiB/s 00:14:51.897 Latency(us) 00:14:51.897 [2024-10-15T04:38:41.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.897 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:51.897 null0 : 5.00 178974.62 699.12 0.00 0.00 354.97 213.85 1908.18 00:14:51.897 [2024-10-15T04:38:41.401Z] =================================================================================================================== 00:14:51.897 [2024-10-15T04:38:41.401Z] Total : 178974.62 699.12 0.00 0.00 354.97 213.85 1908.18 00:14:52.836 04:38:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:52.836 04:38:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:52.836 ************************************ 00:14:52.836 END TEST xnvme_bdevperf 00:14:52.836 ************************************ 00:14:52.836 00:14:52.836 real 0m14.139s 00:14:52.836 user 0m10.523s 00:14:52.836 sys 0m3.384s 00:14:52.836 04:38:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.836 04:38:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.096 ************************************ 00:14:53.096 END TEST nvme_xnvme 00:14:53.096 ************************************ 00:14:53.096 00:14:53.096 real 0m56.672s 00:14:53.096 user 0m47.706s 00:14:53.096 sys 0m8.185s 00:14:53.096 04:38:42 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.096 04:38:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.096 04:38:42 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:53.096 04:38:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:53.096 04:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.096 04:38:42 -- common/autotest_common.sh@10 -- # set +x 00:14:53.096 ************************************ 00:14:53.096 START TEST blockdev_xnvme 00:14:53.096 ************************************ 00:14:53.096 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:53.096 * Looking for test storage... 
00:14:53.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:53.096 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:53.096 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:53.096 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:53.357 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.357 04:38:42 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:53.357 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.357 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:53.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.357 --rc genhtml_branch_coverage=1 00:14:53.357 --rc genhtml_function_coverage=1 00:14:53.357 --rc genhtml_legend=1 00:14:53.357 --rc geninfo_all_blocks=1 00:14:53.357 --rc geninfo_unexecuted_blocks=1 00:14:53.357 00:14:53.357 ' 00:14:53.357 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:53.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.357 --rc genhtml_branch_coverage=1 00:14:53.357 --rc genhtml_function_coverage=1 00:14:53.357 --rc genhtml_legend=1 
00:14:53.357 --rc geninfo_all_blocks=1 00:14:53.357 --rc geninfo_unexecuted_blocks=1 00:14:53.357 00:14:53.357 ' 00:14:53.357 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:53.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.357 --rc genhtml_branch_coverage=1 00:14:53.357 --rc genhtml_function_coverage=1 00:14:53.357 --rc genhtml_legend=1 00:14:53.357 --rc geninfo_all_blocks=1 00:14:53.357 --rc geninfo_unexecuted_blocks=1 00:14:53.357 00:14:53.357 ' 00:14:53.357 04:38:42 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:53.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.357 --rc genhtml_branch_coverage=1 00:14:53.357 --rc genhtml_function_coverage=1 00:14:53.357 --rc genhtml_legend=1 00:14:53.357 --rc geninfo_all_blocks=1 00:14:53.357 --rc geninfo_unexecuted_blocks=1 00:14:53.357 00:14:53.357 ' 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:53.357 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:53.358 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71051 00:14:53.358 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:53.358 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:53.358 04:38:42 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71051 00:14:53.358 04:38:42 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 71051 ']' 00:14:53.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.358 04:38:42 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.358 04:38:42 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.358 04:38:42 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.358 04:38:42 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.358 04:38:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.358 [2024-10-15 04:38:42.804626] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:14:53.358 [2024-10-15 04:38:42.805049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71051 ] 00:14:53.617 [2024-10-15 04:38:42.982996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.617 [2024-10-15 04:38:43.111652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.997 04:38:44 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.997 04:38:44 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:54.997 04:38:44 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:54.997 04:38:44 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:54.997 04:38:44 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:54.997 04:38:44 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:54.997 04:38:44 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:55.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:55.515 Waiting for block devices as requested 00:14:55.515 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:55.774 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:55.774 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:56.033 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.308 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:01.308 
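The get_zoned_devs scan traced here reads each nvme node's queue/zoned attribute and would exclude any device not reporting "none" (all devices on this rig report none, so every [[ none != none ]] test falls through). A condensed sketch of the check:

  # sketch of the zoned-device filter (condensed from the
  # is_block_zoned/get_zoned_devs helpers being traced here)
  for sysdir in /sys/block/nvme*; do
      [[ -e $sysdir/queue/zoned ]] || continue
      if [[ $(<"$sysdir/queue/zoned") != none ]]; then
          echo "excluding zoned device /dev/${sysdir##*/}"
      fi
  done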
04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:01.308 nvme0n1 00:15:01.308 nvme1n1 00:15:01.308 nvme2n1 00:15:01.308 nvme2n2 00:15:01.308 nvme2n3 00:15:01.308 nvme3n1 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.308 04:38:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:01.308 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:01.309 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d8cb1ea1-c384-4bd9-98ea-e3c7b1981654"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d8cb1ea1-c384-4bd9-98ea-e3c7b1981654",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8d6880c0-71a5-45e3-86f1-88c2fd248687"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8d6880c0-71a5-45e3-86f1-88c2fd248687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7c26861a-7731-4b7c-80ce-f73dfa14e3a3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7c26861a-7731-4b7c-80ce-f73dfa14e3a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "52b97178-aeda-4610-85d8-ab465d29df4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52b97178-aeda-4610-85d8-ab465d29df4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c92c6d0c-a27d-4b8c-acc5-5d8b89dbcb13"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c92c6d0c-a27d-4b8c-acc5-5d8b89dbcb13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1e6c1588-fe4a-44d9-94f7-1fcf9f0af5c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1e6c1588-fe4a-44d9-94f7-1fcf9f0af5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:01.309 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:01.309 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:01.309 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:01.309 04:38:50 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71051 00:15:01.309 04:38:50 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 71051 ']' 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71051 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71051 00:15:01.309 killing process with pid 71051 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71051' 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71051 00:15:01.309 04:38:50 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71051 00:15:03.848 04:38:53 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:03.848 04:38:53 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:03.848 04:38:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:03.848 04:38:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.848 04:38:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.848 ************************************ 00:15:03.848 START TEST bdev_hello_world 00:15:03.848 ************************************ 00:15:03.848 04:38:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:04.105 [2024-10-15 04:38:53.388470] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:15:04.105 [2024-10-15 04:38:53.388593] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71427 ] 00:15:04.105 [2024-10-15 04:38:53.560873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.364 [2024-10-15 04:38:53.684725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.932 [2024-10-15 04:38:54.133434] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:04.932 [2024-10-15 04:38:54.133489] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:04.932 [2024-10-15 04:38:54.133508] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:04.932 [2024-10-15 04:38:54.135760] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:04.932 [2024-10-15 04:38:54.136077] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:04.932 [2024-10-15 04:38:54.136111] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:04.932 [2024-10-15 04:38:54.136312] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
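bdev_hello_world just drove the hello_bdev example end to end: open nvme0n1, write "Hello World!" to it, and read the string back. Its standalone invocation, as traced above (run from the spdk checkout):

  # sketch: the hello_bdev run behind the NOTICE lines above
  sudo ./build/examples/hello_bdev \
      --json ./test/bdev/bdev.json -b nvme0n1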
00:15:04.932 00:15:04.932 [2024-10-15 04:38:54.136335] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:05.869 00:15:05.869 real 0m1.977s 00:15:05.869 user 0m1.607s 00:15:05.869 sys 0m0.254s 00:15:05.869 04:38:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.869 ************************************ 00:15:05.869 END TEST bdev_hello_world 00:15:05.869 ************************************ 00:15:05.869 04:38:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:05.869 04:38:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:05.869 04:38:55 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:05.869 04:38:55 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.869 04:38:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.869 ************************************ 00:15:05.869 START TEST bdev_bounds 00:15:05.869 ************************************ 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71469 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71469' 00:15:05.869 Process bdevio pid: 71469 00:15:05.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71469 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71469 ']' 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.869 04:38:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:06.128 [2024-10-15 04:38:55.448148] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
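bdev_bounds starts bdevio as an RPC-driven server over the same bdev.json and then kicks off the whole suite with tests.py perform_tests, which produces the CUnit output that follows. A sketch of that pair of commands (backgrounding and cleanup simplified relative to blockdev.sh):

  # sketch: how bdev_bounds drives the bdevio suites below
  sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  bdevio_pid=$!                 # -w: wait for the perform_tests RPC
  ./test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"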
00:15:06.128 [2024-10-15 04:38:55.448279] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71469 ] 00:15:06.128 [2024-10-15 04:38:55.620943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.386 [2024-10-15 04:38:55.737706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.386 [2024-10-15 04:38:55.737957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.386 [2024-10-15 04:38:55.738020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.954 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.954 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:15:06.954 04:38:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:06.954 I/O targets: 00:15:06.954 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:06.954 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:06.954 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:06.954 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:06.954 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:06.954 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:06.954 00:15:06.954 00:15:06.954 CUnit - A unit testing framework for C - Version 2.1-3 00:15:06.954 http://cunit.sourceforge.net/ 00:15:06.954 00:15:06.954 00:15:06.954 Suite: bdevio tests on: nvme3n1 00:15:06.954 Test: blockdev write read block ...passed 00:15:06.954 Test: blockdev write zeroes read block ...passed 00:15:06.954 Test: blockdev write zeroes read no split ...passed 00:15:06.954 Test: blockdev write zeroes read split ...passed 00:15:06.954 Test: blockdev write zeroes read split partial ...passed 00:15:06.954 Test: blockdev reset ...passed 00:15:06.954 Test: blockdev write read 8 blocks ...passed 00:15:06.954 Test: blockdev write read size > 128k ...passed 00:15:06.954 Test: blockdev write read invalid size ...passed 00:15:06.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:06.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:06.954 Test: blockdev write read max offset ...passed 00:15:06.955 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:06.955 Test: blockdev writev readv 8 blocks ...passed 00:15:07.214 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.214 Test: blockdev writev readv block ...passed 00:15:07.214 Test: blockdev writev readv size > 128k ...passed 00:15:07.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.214 Test: blockdev comparev and writev ...passed 00:15:07.214 Test: blockdev nvme passthru rw ...passed 00:15:07.214 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.214 Test: blockdev nvme admin passthru ...passed 00:15:07.214 Test: blockdev copy ...passed 00:15:07.214 Suite: bdevio tests on: nvme2n3 00:15:07.214 Test: blockdev write read block ...passed 00:15:07.214 Test: blockdev write zeroes read block ...passed 00:15:07.214 Test: blockdev write zeroes read no split ...passed 00:15:07.214 Test: blockdev write zeroes read split ...passed 00:15:07.214 Test: blockdev write zeroes read split partial ...passed 00:15:07.214 Test: blockdev reset ...passed 
00:15:07.214 Test: blockdev write read 8 blocks ...passed 00:15:07.214 Test: blockdev write read size > 128k ...passed 00:15:07.214 Test: blockdev write read invalid size ...passed 00:15:07.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.214 Test: blockdev write read max offset ...passed 00:15:07.214 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.214 Test: blockdev writev readv 8 blocks ...passed 00:15:07.214 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.214 Test: blockdev writev readv block ...passed 00:15:07.214 Test: blockdev writev readv size > 128k ...passed 00:15:07.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.214 Test: blockdev comparev and writev ...passed 00:15:07.214 Test: blockdev nvme passthru rw ...passed 00:15:07.214 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.214 Test: blockdev nvme admin passthru ...passed 00:15:07.214 Test: blockdev copy ...passed 00:15:07.214 Suite: bdevio tests on: nvme2n2 00:15:07.214 Test: blockdev write read block ...passed 00:15:07.214 Test: blockdev write zeroes read block ...passed 00:15:07.214 Test: blockdev write zeroes read no split ...passed 00:15:07.214 Test: blockdev write zeroes read split ...passed 00:15:07.214 Test: blockdev write zeroes read split partial ...passed 00:15:07.214 Test: blockdev reset ...passed 00:15:07.214 Test: blockdev write read 8 blocks ...passed 00:15:07.214 Test: blockdev write read size > 128k ...passed 00:15:07.214 Test: blockdev write read invalid size ...passed 00:15:07.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.214 Test: blockdev write read max offset ...passed 00:15:07.214 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.214 Test: blockdev writev readv 8 blocks ...passed 00:15:07.214 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.214 Test: blockdev writev readv block ...passed 00:15:07.214 Test: blockdev writev readv size > 128k ...passed 00:15:07.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.214 Test: blockdev comparev and writev ...passed 00:15:07.214 Test: blockdev nvme passthru rw ...passed 00:15:07.214 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.214 Test: blockdev nvme admin passthru ...passed 00:15:07.214 Test: blockdev copy ...passed 00:15:07.214 Suite: bdevio tests on: nvme2n1 00:15:07.214 Test: blockdev write read block ...passed 00:15:07.214 Test: blockdev write zeroes read block ...passed 00:15:07.214 Test: blockdev write zeroes read no split ...passed 00:15:07.214 Test: blockdev write zeroes read split ...passed 00:15:07.214 Test: blockdev write zeroes read split partial ...passed 00:15:07.214 Test: blockdev reset ...passed 00:15:07.214 Test: blockdev write read 8 blocks ...passed 00:15:07.214 Test: blockdev write read size > 128k ...passed 00:15:07.214 Test: blockdev write read invalid size ...passed 00:15:07.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.214 Test: blockdev write read max offset ...passed 00:15:07.214 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.214 Test: blockdev writev readv 8 blocks 
...passed 00:15:07.214 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.214 Test: blockdev writev readv block ...passed 00:15:07.214 Test: blockdev writev readv size > 128k ...passed 00:15:07.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.214 Test: blockdev comparev and writev ...passed 00:15:07.214 Test: blockdev nvme passthru rw ...passed 00:15:07.214 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.214 Test: blockdev nvme admin passthru ...passed 00:15:07.214 Test: blockdev copy ...passed 00:15:07.214 Suite: bdevio tests on: nvme1n1 00:15:07.214 Test: blockdev write read block ...passed 00:15:07.214 Test: blockdev write zeroes read block ...passed 00:15:07.214 Test: blockdev write zeroes read no split ...passed 00:15:07.474 Test: blockdev write zeroes read split ...passed 00:15:07.474 Test: blockdev write zeroes read split partial ...passed 00:15:07.474 Test: blockdev reset ...passed 00:15:07.474 Test: blockdev write read 8 blocks ...passed 00:15:07.474 Test: blockdev write read size > 128k ...passed 00:15:07.474 Test: blockdev write read invalid size ...passed 00:15:07.474 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.474 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.474 Test: blockdev write read max offset ...passed 00:15:07.474 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.474 Test: blockdev writev readv 8 blocks ...passed 00:15:07.474 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.474 Test: blockdev writev readv block ...passed 00:15:07.474 Test: blockdev writev readv size > 128k ...passed 00:15:07.474 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.474 Test: blockdev comparev and writev ...passed 00:15:07.474 Test: blockdev nvme passthru rw ...passed 00:15:07.474 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.474 Test: blockdev nvme admin passthru ...passed 00:15:07.474 Test: blockdev copy ...passed 00:15:07.474 Suite: bdevio tests on: nvme0n1 00:15:07.474 Test: blockdev write read block ...passed 00:15:07.474 Test: blockdev write zeroes read block ...passed 00:15:07.474 Test: blockdev write zeroes read no split ...passed 00:15:07.474 Test: blockdev write zeroes read split ...passed 00:15:07.474 Test: blockdev write zeroes read split partial ...passed 00:15:07.474 Test: blockdev reset ...passed 00:15:07.474 Test: blockdev write read 8 blocks ...passed 00:15:07.474 Test: blockdev write read size > 128k ...passed 00:15:07.474 Test: blockdev write read invalid size ...passed 00:15:07.474 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.474 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.474 Test: blockdev write read max offset ...passed 00:15:07.474 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.474 Test: blockdev writev readv 8 blocks ...passed 00:15:07.474 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.474 Test: blockdev writev readv block ...passed 00:15:07.474 Test: blockdev writev readv size > 128k ...passed 00:15:07.474 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.474 Test: blockdev comparev and writev ...passed 00:15:07.474 Test: blockdev nvme passthru rw ...passed 00:15:07.474 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.474 Test: blockdev nvme admin passthru ...passed 00:15:07.474 Test: blockdev copy ...passed 
00:15:07.474 00:15:07.474 Run Summary: Type Total Ran Passed Failed Inactive 00:15:07.474 suites 6 6 n/a 0 0 00:15:07.474 tests 138 138 138 0 0 00:15:07.474 asserts 780 780 780 0 n/a 00:15:07.474 00:15:07.474 Elapsed time = 1.298 seconds 00:15:07.474 0 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71469 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71469 ']' 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71469 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71469 00:15:07.474 killing process with pid 71469 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71469' 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71469 00:15:07.474 04:38:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71469 00:15:08.853 ************************************ 00:15:08.853 END TEST bdev_bounds 00:15:08.853 ************************************ 00:15:08.853 04:38:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:08.853 00:15:08.853 real 0m2.717s 00:15:08.853 user 0m6.738s 00:15:08.853 sys 0m0.425s 00:15:08.853 04:38:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.853 04:38:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:08.853 04:38:58 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:08.853 04:38:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:08.853 04:38:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.853 04:38:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.853 ************************************ 00:15:08.853 START TEST bdev_nbd 00:15:08.853 ************************************ 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
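bdev_nbd wires each of the six bdevs to a kernel /dev/nbd* node and exercises it with ordinary block I/O, as the traced commands below show. Condensed from those traces (the real helper starts all six devices before stopping any, polls /proc/partitions via waitfornbd, and cleans up with nbd_get_disks; this sketch folds that into one loop for brevity, and shortens the absolute /home/vagrant/spdk_repo/spdk paths):

    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    nbd_pid=$!                               # 71530 in this run
    # wait for the RPC socket (see the waitforlisten sketch further down), then:
    for bdev in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        nbd=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")   # returns e.g. /dev/nbd0
        dd if="$nbd" of=test/bdev/nbdtest bs=4096 count=1 iflag=direct           # node must serve reads
        [ "$(stat -c %s test/bdev/nbdtest)" != 0 ]                               # and yield real data
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
    done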
00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71530 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71530 /var/tmp/spdk-nbd.sock 00:15:08.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71530 ']' 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.853 04:38:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:08.853 [2024-10-15 04:38:58.246024] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
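bdev_svc is coming up at this point, and waitforlisten 71530 (traced above) blocks until the app accepts RPCs on /var/tmp/spdk-nbd.sock. Only the rpc_addr local and max_retries=100 are visible in this log, so the polling body below is an assumed shape rather than a copy of autotest_common.sh:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1       # app died before it could listen
            if [ -S "$rpc_addr" ] &&
               scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                 # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }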
00:15:08.853 [2024-10-15 04:38:58.246161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.112 [2024-10-15 04:38:58.421439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.112 [2024-10-15 04:38:58.541232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.679 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.938 
1+0 records in 00:15:09.938 1+0 records out 00:15:09.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656352 s, 6.2 MB/s 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.938 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.197 1+0 records in 00:15:10.197 1+0 records out 00:15:10.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651836 s, 6.3 MB/s 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.197 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:10.455 04:38:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.455 1+0 records in 00:15:10.455 1+0 records out 00:15:10.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862303 s, 4.8 MB/s 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.455 04:38:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.714 1+0 records in 00:15:10.714 1+0 records out 00:15:10.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678481 s, 6.0 MB/s 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.714 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.972 1+0 records in 00:15:10.972 1+0 records out 00:15:10.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120071 s, 3.4 MB/s 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.972 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:15:11.544 04:39:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.544 1+0 records in 00:15:11.544 1+0 records out 00:15:11.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843839 s, 4.9 MB/s 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:11.544 04:39:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd0", 00:15:11.802 "bdev_name": "nvme0n1" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd1", 00:15:11.802 "bdev_name": "nvme1n1" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd2", 00:15:11.802 "bdev_name": "nvme2n1" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd3", 00:15:11.802 "bdev_name": "nvme2n2" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd4", 00:15:11.802 "bdev_name": "nvme2n3" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd5", 00:15:11.802 "bdev_name": "nvme3n1" 00:15:11.802 } 00:15:11.802 ]' 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd0", 00:15:11.802 "bdev_name": "nvme0n1" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd1", 00:15:11.802 "bdev_name": "nvme1n1" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd2", 00:15:11.802 "bdev_name": "nvme2n1" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd3", 00:15:11.802 "bdev_name": "nvme2n2" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd4", 00:15:11.802 "bdev_name": "nvme2n3" 00:15:11.802 }, 00:15:11.802 { 00:15:11.802 "nbd_device": "/dev/nbd5", 00:15:11.802 "bdev_name": "nvme3n1" 00:15:11.802 } 00:15:11.802 ]' 00:15:11.802 04:39:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.802 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.061 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.319 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.578 04:39:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.837 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.095 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.355 04:39:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:13.613 /dev/nbd0 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.613 1+0 records in 00:15:13.613 1+0 records out 00:15:13.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555239 s, 7.4 MB/s 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.613 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:13.871 /dev/nbd1 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.871 1+0 records in 00:15:13.871 1+0 records out 00:15:13.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497318 s, 8.2 MB/s 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:13.871 04:39:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.871 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:14.130 /dev/nbd10 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.130 1+0 records in 00:15:14.130 1+0 records out 00:15:14.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706897 s, 5.8 MB/s 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.130 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:14.390 /dev/nbd11 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:14.390 04:39:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.390 1+0 records in 00:15:14.390 1+0 records out 00:15:14.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631162 s, 6.5 MB/s 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.390 04:39:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:14.648 /dev/nbd12 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.909 1+0 records in 00:15:14.909 1+0 records out 00:15:14.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817339 s, 5.0 MB/s 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.909 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:15.168 /dev/nbd13 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.168 1+0 records in 00:15:15.168 1+0 records out 00:15:15.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575549 s, 7.1 MB/s 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.168 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd0", 00:15:15.427 "bdev_name": "nvme0n1" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd1", 00:15:15.427 "bdev_name": "nvme1n1" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd10", 00:15:15.427 "bdev_name": "nvme2n1" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd11", 00:15:15.427 "bdev_name": "nvme2n2" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd12", 00:15:15.427 "bdev_name": "nvme2n3" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd13", 00:15:15.427 "bdev_name": "nvme3n1" 00:15:15.427 } 00:15:15.427 ]' 00:15:15.427 04:39:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd0", 00:15:15.427 "bdev_name": "nvme0n1" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd1", 00:15:15.427 "bdev_name": "nvme1n1" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd10", 00:15:15.427 "bdev_name": "nvme2n1" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd11", 00:15:15.427 "bdev_name": "nvme2n2" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd12", 00:15:15.427 "bdev_name": "nvme2n3" 00:15:15.427 }, 00:15:15.427 { 00:15:15.427 "nbd_device": "/dev/nbd13", 00:15:15.427 "bdev_name": "nvme3n1" 00:15:15.427 } 00:15:15.427 ]' 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:15.427 /dev/nbd1 00:15:15.427 /dev/nbd10 00:15:15.427 /dev/nbd11 00:15:15.427 /dev/nbd12 00:15:15.427 /dev/nbd13' 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:15.427 /dev/nbd1 00:15:15.427 /dev/nbd10 00:15:15.427 /dev/nbd11 00:15:15.427 /dev/nbd12 00:15:15.427 /dev/nbd13' 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:15.427 256+0 records in 00:15:15.427 256+0 records out 00:15:15.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128849 s, 81.4 MB/s 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.427 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:15.427 256+0 records in 00:15:15.427 256+0 records out 00:15:15.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118949 s, 8.8 MB/s 00:15:15.428 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.428 04:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:15.686 256+0 records in 00:15:15.686 256+0 records out 00:15:15.686 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.142253 s, 7.4 MB/s 00:15:15.686 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.686 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:15.686 256+0 records in 00:15:15.686 256+0 records out 00:15:15.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124195 s, 8.4 MB/s 00:15:15.686 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.686 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:15.945 256+0 records in 00:15:15.945 256+0 records out 00:15:15.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119406 s, 8.8 MB/s 00:15:15.945 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.945 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:15.946 256+0 records in 00:15:15.946 256+0 records out 00:15:15.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122694 s, 8.5 MB/s 00:15:15.946 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.946 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:16.204 256+0 records in 00:15:16.205 256+0 records out 00:15:16.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121668 s, 8.6 MB/s 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.205 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.465 04:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.778 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:17.059 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:17.059 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:17.059 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.060 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.319 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.577 04:39:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.577 04:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:17.836 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:18.094 malloc_lvol_verify 00:15:18.094 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:18.353 12aab167-dc75-49b4-a49b-9e23bf1f07d5 00:15:18.353 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:18.611 959b88fc-811e-4e62-848b-ed78ddb7053f 00:15:18.611 04:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:18.870 /dev/nbd0 00:15:18.870 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:18.870 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:18.870 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:18.870 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:18.870 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
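The loop above is the nbd smoke test proper: dd pushes 1 MiB of random data through each /dev/nbdN, cmp reads it back against the source file, and nbd_stop_disk tears each device down once /proc/partitions confirms it is gone. The mkfs.ext4 just issued belongs to nbd_with_lvol_verify, which layers an lvol store on a malloc bdev and proves the exported lvol takes real filesystem I/O. Condensed to its RPC calls, that flow looks roughly like this (names, sizes, and socket path mirror the log; error handling and the capacity wait are omitted):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # 16 MiB malloc bdev with 512-byte blocks backs the lvol store
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $RPC bdev_lvol_create lvol 4 -l lvs          # 4 MiB lvol inside lvs
  $RPC nbd_start_disk lvs/lvol /dev/nbd0       # expose it as a kernel block device
  mkfs.ext4 /dev/nbd0                          # real I/O must succeed
  $RPC nbd_stop_disk /dev/nbd0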
00:15:18.870 mke2fs 1.47.0 (5-Feb-2023) 00:15:18.870 Discarding device blocks: 0/4096 done 00:15:18.871 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:18.871 00:15:18.871 Allocating group tables: 0/1 done 00:15:18.871 Writing inode tables: 0/1 done 00:15:18.871 Creating journal (1024 blocks): done 00:15:18.871 Writing superblocks and filesystem accounting information: 0/1 done 00:15:18.871 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.871 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71530 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71530 ']' 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71530 00:15:19.129 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71530 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71530' 00:15:19.130 killing process with pid 71530 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71530 00:15:19.130 04:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71530 00:15:20.506 04:39:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:20.506 00:15:20.506 real 0m11.543s 00:15:20.506 user 0m15.045s 00:15:20.506 sys 0m4.882s 00:15:20.506 04:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.506 ************************************ 00:15:20.506 END TEST bdev_nbd 00:15:20.506 04:39:09 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.506 ************************************ 00:15:20.506 04:39:09 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:20.506 04:39:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:20.506 04:39:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:20.506 04:39:09 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:20.506 04:39:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.506 04:39:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.506 04:39:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.506 ************************************ 00:15:20.506 START TEST bdev_fio 00:15:20.506 ************************************ 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:20.506 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.506 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:20.507 ************************************ 00:15:20.507 START TEST bdev_fio_rw_verify 00:15:20.507 ************************************ 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:20.507 04:39:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:20.765 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:20.765 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:20.765 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:20.765 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:20.765 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:20.765 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:20.765 fio-3.35 00:15:20.765 Starting 6 threads 00:15:32.971 00:15:32.971 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71947: Tue Oct 15 04:39:21 2024 00:15:32.971 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(1289MiB/10002msec) 00:15:32.971 slat (usec): min=2, max=2488, avg= 6.11, stdev= 6.42 00:15:32.971 clat (usec): min=99, max=18221, 
avg=560.39, stdev=241.28 00:15:32.971 lat (usec): min=110, max=18230, avg=566.51, stdev=242.09 00:15:32.971 clat percentiles (usec): 00:15:32.971 | 50.000th=[ 562], 99.000th=[ 1205], 99.900th=[ 2180], 99.990th=[ 4178], 00:15:32.971 | 99.999th=[17695] 00:15:32.971 write: IOPS=33.4k, BW=130MiB/s (137MB/s)(1304MiB/10002msec); 0 zone resets 00:15:32.971 slat (usec): min=11, max=4609, avg=23.79, stdev=34.25 00:15:32.971 clat (usec): min=72, max=5290, avg=640.72, stdev=246.96 00:15:32.971 lat (usec): min=90, max=5341, avg=664.51, stdev=252.66 00:15:32.972 clat percentiles (usec): 00:15:32.972 | 50.000th=[ 627], 99.000th=[ 1450], 99.900th=[ 2180], 99.990th=[ 3490], 00:15:32.972 | 99.999th=[ 5145] 00:15:32.972 bw ( KiB/s): min=102779, max=158720, per=100.00%, avg=133703.63, stdev=2468.28, samples=114 00:15:32.972 iops : min=25694, max=39679, avg=33425.53, stdev=617.07, samples=114 00:15:32.972 lat (usec) : 100=0.01%, 250=5.00%, 500=26.66%, 750=50.00%, 1000=13.68% 00:15:32.972 lat (msec) : 2=4.49%, 4=0.15%, 10=0.01%, 20=0.01% 00:15:32.972 cpu : usr=55.76%, sys=30.09%, ctx=7868, majf=0, minf=27465 00:15:32.972 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:32.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.972 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.972 issued rwts: total=329963,333757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:32.972 00:15:32.972 Run status group 0 (all jobs): 00:15:32.972 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1289MiB (1352MB), run=10002-10002msec 00:15:32.972 WRITE: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=1304MiB (1367MB), run=10002-10002msec 00:15:32.972 ----------------------------------------------------- 00:15:32.972 Suppressions used: 00:15:32.972 count bytes template 00:15:32.972 6 48 /usr/src/fio/parse.c 00:15:32.972 3551 340896 /usr/src/fio/iolog.c 00:15:32.972 1 8 libtcmalloc_minimal.so 00:15:32.972 1 904 libcrypto.so 00:15:32.972 ----------------------------------------------------- 00:15:32.972 00:15:33.230 00:15:33.230 real 0m12.638s 00:15:33.230 user 0m35.572s 00:15:33.230 sys 0m18.491s 00:15:33.230 ************************************ 00:15:33.230 END TEST bdev_fio_rw_verify 00:15:33.230 ************************************ 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- 
# local fio_dir=/usr/src/fio 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:33.230 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d8cb1ea1-c384-4bd9-98ea-e3c7b1981654"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d8cb1ea1-c384-4bd9-98ea-e3c7b1981654",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8d6880c0-71a5-45e3-86f1-88c2fd248687"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8d6880c0-71a5-45e3-86f1-88c2fd248687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7c26861a-7731-4b7c-80ce-f73dfa14e3a3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7c26861a-7731-4b7c-80ce-f73dfa14e3a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "52b97178-aeda-4610-85d8-ab465d29df4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52b97178-aeda-4610-85d8-ab465d29df4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c92c6d0c-a27d-4b8c-acc5-5d8b89dbcb13"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c92c6d0c-a27d-4b8c-acc5-5d8b89dbcb13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1e6c1588-fe4a-44d9-94f7-1fcf9f0af5c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1e6c1588-fe4a-44d9-94f7-1fcf9f0af5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:33.231 /home/vagrant/spdk_repo/spdk 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:15:33.231 00:15:33.231 real 0m12.878s 00:15:33.231 user 0m35.689s 00:15:33.231 sys 0m18.620s 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.231 04:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:33.231 ************************************ 00:15:33.231 END TEST bdev_fio 00:15:33.231 ************************************ 00:15:33.231 04:39:22 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:33.231 04:39:22 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:33.231 04:39:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:33.231 04:39:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.231 04:39:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.231 ************************************ 00:15:33.231 START TEST bdev_verify 00:15:33.231 ************************************ 00:15:33.231 04:39:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:33.492 [2024-10-15 04:39:22.811567] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:15:33.492 [2024-10-15 04:39:22.811705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72117 ] 00:15:33.492 [2024-10-15 04:39:22.990223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.750 [2024-10-15 04:39:23.112547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.750 [2024-10-15 04:39:23.112590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.317 Running I/O for 5 seconds... 
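With the fio suite done, bdev_verify launches bdevperf against the same six xnvme bdevs: 4 KiB I/O at queue depth 128 for 5 seconds in the verify workload, spread across two reactors (-m 0x3, hence the two "Reactor started" notices above). Stripped of the run_test wrapper, the invocation reduces to roughly the following sketch, where bdev.json is the generated config describing the xnvme bdevs and the flags are exactly those in the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" \
      --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

A per-job latency table follows; the harness gates only on the process exit code, not on the numbers.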
00:15:36.624 22464.00 IOPS, 87.75 MiB/s [2024-10-15T04:39:27.073Z] 22816.00 IOPS, 89.12 MiB/s [2024-10-15T04:39:28.009Z] 23189.33 IOPS, 90.58 MiB/s [2024-10-15T04:39:28.944Z] 24096.00 IOPS, 94.12 MiB/s [2024-10-15T04:39:28.944Z] 23616.00 IOPS, 92.25 MiB/s 00:15:39.440 Latency(us) 00:15:39.440 [2024-10-15T04:39:28.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.440 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x0 length 0xa0000 00:15:39.440 nvme0n1 : 5.05 1749.44 6.83 0.00 0.00 73047.73 13265.12 72852.87 00:15:39.440 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0xa0000 length 0xa0000 00:15:39.440 nvme0n1 : 5.05 1775.47 6.94 0.00 0.00 71977.78 7369.51 70326.18 00:15:39.440 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x0 length 0xbd0bd 00:15:39.440 nvme1n1 : 5.05 2781.60 10.87 0.00 0.00 45846.32 5211.30 66115.03 00:15:39.440 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:39.440 nvme1n1 : 5.02 2898.38 11.32 0.00 0.00 44001.71 5290.26 60219.42 00:15:39.440 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x0 length 0x80000 00:15:39.440 nvme2n1 : 5.06 1769.23 6.91 0.00 0.00 71797.97 12528.17 84222.97 00:15:39.440 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x80000 length 0x80000 00:15:39.440 nvme2n1 : 5.05 1799.47 7.03 0.00 0.00 70824.18 7790.62 63167.23 00:15:39.440 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x0 length 0x80000 00:15:39.440 nvme2n2 : 5.05 1747.54 6.83 0.00 0.00 72531.24 12528.17 69483.95 00:15:39.440 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x80000 length 0x80000 00:15:39.440 nvme2n2 : 5.04 1778.10 6.95 0.00 0.00 71475.53 7790.62 62325.00 00:15:39.440 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x0 length 0x80000 00:15:39.440 nvme2n3 : 5.08 1765.14 6.90 0.00 0.00 71715.30 6711.52 68220.61 00:15:39.440 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x80000 length 0x80000 00:15:39.440 nvme2n3 : 5.06 1797.10 7.02 0.00 0.00 70591.16 2921.48 62746.11 00:15:39.440 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x0 length 0x20000 00:15:39.440 nvme3n1 : 5.07 1766.04 6.90 0.00 0.00 71633.30 3474.20 70326.18 00:15:39.440 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.440 Verification LBA range: start 0x20000 length 0x20000 00:15:39.440 nvme3n1 : 5.06 1796.62 7.02 0.00 0.00 70502.89 3605.80 66536.15 00:15:39.440 [2024-10-15T04:39:28.944Z] =================================================================================================================== 00:15:39.440 [2024-10-15T04:39:28.944Z] Total : 23424.13 91.50 0.00 0.00 65148.69 2921.48 84222.97 00:15:40.814 00:15:40.814 real 0m7.254s 00:15:40.814 user 0m11.146s 00:15:40.814 sys 0m2.058s 00:15:40.814 04:39:29 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.814 ************************************ 00:15:40.814 END TEST bdev_verify 00:15:40.814 ************************************ 00:15:40.814 04:39:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:40.814 04:39:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:40.814 04:39:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:40.814 04:39:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.814 04:39:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.814 ************************************ 00:15:40.814 START TEST bdev_verify_big_io 00:15:40.814 ************************************ 00:15:40.814 04:39:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:40.814 [2024-10-15 04:39:30.152669] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:15:40.814 [2024-10-15 04:39:30.152844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72219 ] 00:15:41.073 [2024-10-15 04:39:30.340995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:41.073 [2024-10-15 04:39:30.462213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.073 [2024-10-15 04:39:30.462244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.639 Running I/O for 5 seconds... 
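bdev_verify_big_io is the same harness at 64 KiB (-o 65536), which is what lifts per-job bandwidth into the hundreds of MiB/s in the table below. When eyeballing regressions between runs, the aggregate "Total" row is the useful one; a throwaway extraction from plain bdevperf stdout (assuming no CI timestamp prefixes on the saved output) could be:

  # Print aggregate IOPS and MiB/s from the "Total" row of a saved bdevperf run
  awk '$1 == "Total" { print "IOPS=" $3, "MiB/s=" $4 }' bdevperf.out

The test itself checks none of these figures, only that verification completed without error.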
00:15:46.880 1464.00 IOPS, 91.50 MiB/s [2024-10-15T04:39:36.951Z] 2934.00 IOPS, 183.38 MiB/s [2024-10-15T04:39:36.951Z] 3449.67 IOPS, 215.60 MiB/s 00:15:47.447 Latency(us) 00:15:47.447 [2024-10-15T04:39:36.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.447 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x0 length 0xa000 00:15:47.447 nvme0n1 : 5.65 144.31 9.02 0.00 0.00 859362.02 14317.91 1091529.72 00:15:47.447 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0xa000 length 0xa000 00:15:47.447 nvme0n1 : 5.77 134.60 8.41 0.00 0.00 920825.16 176868.24 916345.93 00:15:47.447 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x0 length 0xbd0b 00:15:47.447 nvme1n1 : 5.66 155.58 9.72 0.00 0.00 772125.10 30530.83 882656.75 00:15:47.447 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:47.447 nvme1n1 : 5.78 135.96 8.50 0.00 0.00 899579.76 24951.06 2223486.46 00:15:47.447 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x0 length 0x8000 00:15:47.447 nvme2n1 : 5.76 130.54 8.16 0.00 0.00 898647.77 46743.75 1185859.44 00:15:47.447 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x8000 length 0x8000 00:15:47.447 nvme2n1 : 5.78 160.47 10.03 0.00 0.00 738604.83 80011.82 936559.45 00:15:47.447 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x0 length 0x8000 00:15:47.447 nvme2n2 : 5.76 153.23 9.58 0.00 0.00 745458.07 49270.44 1280189.17 00:15:47.447 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x8000 length 0x8000 00:15:47.447 nvme2n2 : 5.77 141.30 8.83 0.00 0.00 809697.31 71168.41 1940497.27 00:15:47.447 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x0 length 0x8000 00:15:47.447 nvme2n3 : 5.82 169.02 10.56 0.00 0.00 666776.59 25372.17 1320616.20 00:15:47.447 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x8000 length 0x8000 00:15:47.447 nvme2n3 : 5.78 150.91 9.43 0.00 0.00 748608.84 56008.28 1994399.97 00:15:47.447 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x0 length 0x2000 00:15:47.447 nvme3n1 : 5.82 173.32 10.83 0.00 0.00 635280.62 11106.90 774851.34 00:15:47.447 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:47.447 Verification LBA range: start 0x2000 length 0x2000 00:15:47.447 nvme3n1 : 5.79 187.97 11.75 0.00 0.00 589922.26 7264.23 781589.18 00:15:47.447 [2024-10-15T04:39:36.951Z] =================================================================================================================== 00:15:47.447 [2024-10-15T04:39:36.951Z] Total : 1837.21 114.83 0.00 0.00 762449.71 7264.23 2223486.46 00:15:49.348 00:15:49.348 real 0m8.289s 00:15:49.348 user 0m15.001s 00:15:49.348 sys 0m0.595s 00:15:49.348 04:39:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.348 04:39:38 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.348 ************************************ 00:15:49.348 END TEST bdev_verify_big_io 00:15:49.348 ************************************ 00:15:49.348 04:39:38 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.348 04:39:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:49.348 04:39:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.348 04:39:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.348 ************************************ 00:15:49.348 START TEST bdev_write_zeroes 00:15:49.348 ************************************ 00:15:49.348 04:39:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.348 [2024-10-15 04:39:38.499321] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:15:49.348 [2024-10-15 04:39:38.499651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72343 ] 00:15:49.348 [2024-10-15 04:39:38.669977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.348 [2024-10-15 04:39:38.784116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.913 Running I/O for 1 seconds... 
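bdev_write_zeroes swaps the workload for -w write_zeroes at the same 4 KiB size, for one second on a single reactor (no -m, so only core 0 comes up). The point is not throughput but that every bdev accepts the write-zeroes path, which the bdev JSON dumped earlier advertised with "write_zeroes": true. In sketch form, only the workload flags change from the verify runs:

  "$SPDK/build/examples/bdevperf" \
      --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w write_zeroes -t 1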
00:15:50.847 49792.00 IOPS, 194.50 MiB/s 00:15:50.847 Latency(us) 00:15:50.847 [2024-10-15T04:39:40.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.847 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.847 nvme0n1 : 1.03 7607.80 29.72 0.00 0.00 16811.33 9001.33 28635.81 00:15:50.847 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.847 nvme1n1 : 1.03 11196.35 43.74 0.00 0.00 11377.81 6237.76 24319.38 00:15:50.847 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.847 nvme2n1 : 1.03 7597.53 29.68 0.00 0.00 16705.38 9159.25 28004.14 00:15:50.847 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.847 nvme2n2 : 1.03 7589.51 29.65 0.00 0.00 16715.83 9053.97 28425.25 00:15:50.847 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.847 nvme2n3 : 1.03 7581.78 29.62 0.00 0.00 16722.81 9053.97 28635.81 00:15:50.847 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.847 nvme3n1 : 1.03 7574.12 29.59 0.00 0.00 16727.81 9001.33 28846.37 00:15:50.847 [2024-10-15T04:39:40.351Z] =================================================================================================================== 00:15:50.847 [2024-10-15T04:39:40.351Z] Total : 49147.10 191.98 0.00 0.00 15513.02 6237.76 28846.37 00:15:52.224 00:15:52.224 real 0m3.035s 00:15:52.224 user 0m2.225s 00:15:52.224 sys 0m0.597s 00:15:52.224 04:39:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.224 04:39:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:52.224 ************************************ 00:15:52.224 END TEST bdev_write_zeroes 00:15:52.224 ************************************ 00:15:52.224 04:39:41 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.224 04:39:41 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:52.224 04:39:41 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.224 04:39:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.224 ************************************ 00:15:52.224 START TEST bdev_json_nonenclosed 00:15:52.224 ************************************ 00:15:52.224 04:39:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.224 [2024-10-15 04:39:41.607630] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:15:52.224 [2024-10-15 04:39:41.607754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72404 ] 00:15:52.482 [2024-10-15 04:39:41.780716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.482 [2024-10-15 04:39:41.908514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.482 [2024-10-15 04:39:41.908802] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:52.482 [2024-10-15 04:39:41.908848] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:52.482 [2024-10-15 04:39:41.908862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:52.739 00:15:52.739 real 0m0.662s 00:15:52.739 user 0m0.415s 00:15:52.739 sys 0m0.141s 00:15:52.739 04:39:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.739 ************************************ 00:15:52.739 END TEST bdev_json_nonenclosed 00:15:52.740 ************************************ 00:15:52.740 04:39:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:52.740 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.740 04:39:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:52.740 04:39:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.740 04:39:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.740 ************************************ 00:15:52.740 START TEST bdev_json_nonarray 00:15:52.740 ************************************ 00:15:52.740 04:39:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:52.998 [2024-10-15 04:39:42.340151] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:15:52.998 [2024-10-15 04:39:42.340278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72425 ] 00:15:53.265 [2024-10-15 04:39:42.515757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.265 [2024-10-15 04:39:42.636387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.265 [2024-10-15 04:39:42.636499] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
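Both JSON tests are deliberate failures: bdev_json_nonenclosed feeds bdevperf a config whose top level is not wrapped in {} (the "not enclosed in {}" error above), and bdev_json_nonarray one whose "subsystems" key is an object instead of an array; each passes only if json_config_prepare_ctx rejects the file and the app stops. A self-contained negative check in the same spirit, where the printf payloads are minimal stand-ins inferred from the error messages, not the repo's nonenclosed.json/nonarray.json verbatim:

  printf '"subsystems": []\n' > /tmp/bad1.json        # top level not enclosed in {}
  printf '{ "subsystems": {} }\n' > /tmp/bad2.json    # "subsystems" is not an array
  for cfg in /tmp/bad1.json /tmp/bad2.json; do
      if "$SPDK/build/examples/bdevperf" --json "$cfg" \
             -q 128 -o 4096 -w write_zeroes -t 1; then
          echo "FAIL: invalid config $cfg was accepted"
      fi
  done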
00:15:53.265 [2024-10-15 04:39:42.636521] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:53.265 [2024-10-15 04:39:42.636534] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:53.524 00:15:53.524 real 0m0.653s 00:15:53.524 user 0m0.411s 00:15:53.524 sys 0m0.136s 00:15:53.524 ************************************ 00:15:53.524 END TEST bdev_json_nonarray 00:15:53.524 ************************************ 00:15:53.524 04:39:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.524 04:39:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:53.524 04:39:42 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:54.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.596 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:02.596 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:02.596 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:02.596 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:02.596 00:16:02.596 real 1m8.830s 00:16:02.596 user 1m40.394s 00:16:02.596 sys 0m37.699s 00:16:02.596 ************************************ 00:16:02.596 END TEST blockdev_xnvme 00:16:02.596 ************************************ 00:16:02.596 04:39:51 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.596 04:39:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 04:39:51 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:02.596 04:39:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:02.596 04:39:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.596 04:39:51 -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 ************************************ 00:16:02.596 START TEST ublk 00:16:02.596 ************************************ 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:02.596 * Looking for test storage... 
00:16:02.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.596 04:39:51 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.596 04:39:51 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.596 04:39:51 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.596 04:39:51 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.596 04:39:51 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.596 04:39:51 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:02.596 04:39:51 ublk -- scripts/common.sh@345 -- # : 1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.596 04:39:51 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.596 04:39:51 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@353 -- # local d=1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.596 04:39:51 ublk -- scripts/common.sh@355 -- # echo 1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.596 04:39:51 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@353 -- # local d=2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.596 04:39:51 ublk -- scripts/common.sh@355 -- # echo 2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.596 04:39:51 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.596 04:39:51 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.596 04:39:51 ublk -- scripts/common.sh@368 -- # return 0 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.596 --rc genhtml_branch_coverage=1 00:16:02.596 --rc genhtml_function_coverage=1 00:16:02.596 --rc genhtml_legend=1 00:16:02.596 --rc geninfo_all_blocks=1 00:16:02.596 --rc geninfo_unexecuted_blocks=1 00:16:02.596 00:16:02.596 ' 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.596 --rc genhtml_branch_coverage=1 00:16:02.596 --rc genhtml_function_coverage=1 00:16:02.596 --rc genhtml_legend=1 00:16:02.596 --rc geninfo_all_blocks=1 00:16:02.596 --rc geninfo_unexecuted_blocks=1 00:16:02.596 00:16:02.596 ' 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.596 --rc genhtml_branch_coverage=1 00:16:02.596 --rc 
genhtml_function_coverage=1 00:16:02.596 --rc genhtml_legend=1 00:16:02.596 --rc geninfo_all_blocks=1 00:16:02.596 --rc geninfo_unexecuted_blocks=1 00:16:02.596 00:16:02.596 ' 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.596 --rc genhtml_branch_coverage=1 00:16:02.596 --rc genhtml_function_coverage=1 00:16:02.596 --rc genhtml_legend=1 00:16:02.596 --rc geninfo_all_blocks=1 00:16:02.596 --rc geninfo_unexecuted_blocks=1 00:16:02.596 00:16:02.596 ' 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:02.596 04:39:51 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:02.596 04:39:51 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:02.596 04:39:51 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:02.596 04:39:51 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:02.596 04:39:51 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:02.596 04:39:51 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:02.596 04:39:51 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:02.596 04:39:51 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:02.596 04:39:51 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.596 04:39:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 ************************************ 00:16:02.596 START TEST test_save_ublk_config 00:16:02.596 ************************************ 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72726 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72726 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72726 ']' 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
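The waitforlisten helper traced below blocks until the freshly launched spdk_tgt answers on its RPC Unix socket. As a rough standalone equivalent (a sketch, not the helper's actual implementation: paths follow this log's repo layout, and rpc_get_methods is used here only as a cheap liveness probe):

    # start the target with ublk debug traces, as this test does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    # poll the default RPC socket until it accepts a request
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done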
00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.596 04:39:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 [2024-10-15 04:39:51.708557] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:16:02.596 [2024-10-15 04:39:51.708681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72726 ] 00:16:02.596 [2024-10-15 04:39:51.881194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.596 [2024-10-15 04:39:51.997869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.534 04:39:52 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.534 04:39:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:16:03.534 04:39:52 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:03.534 04:39:52 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:03.534 04:39:52 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.534 04:39:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:03.534 [2024-10-15 04:39:52.918836] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:03.534 [2024-10-15 04:39:52.919780] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:03.534 malloc0 00:16:03.534 [2024-10-15 04:39:53.006994] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:03.534 [2024-10-15 04:39:53.007105] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:03.534 [2024-10-15 04:39:53.007118] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:03.534 [2024-10-15 04:39:53.007127] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:03.534 [2024-10-15 04:39:53.015915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:03.534 [2024-10-15 04:39:53.015943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:03.534 [2024-10-15 04:39:53.022850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:03.534 [2024-10-15 04:39:53.022946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:03.793 [2024-10-15 04:39:53.039847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:03.793 0 00:16:03.793 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.793 04:39:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:03.793 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.793 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:04.053 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.053 04:39:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:04.053 
"subsystems": [ 00:16:04.053 { 00:16:04.053 "subsystem": "fsdev", 00:16:04.053 "config": [ 00:16:04.053 { 00:16:04.053 "method": "fsdev_set_opts", 00:16:04.053 "params": { 00:16:04.053 "fsdev_io_pool_size": 65535, 00:16:04.053 "fsdev_io_cache_size": 256 00:16:04.053 } 00:16:04.053 } 00:16:04.053 ] 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "subsystem": "keyring", 00:16:04.053 "config": [] 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "subsystem": "iobuf", 00:16:04.053 "config": [ 00:16:04.053 { 00:16:04.053 "method": "iobuf_set_options", 00:16:04.053 "params": { 00:16:04.053 "small_pool_count": 8192, 00:16:04.053 "large_pool_count": 1024, 00:16:04.053 "small_bufsize": 8192, 00:16:04.053 "large_bufsize": 135168 00:16:04.053 } 00:16:04.053 } 00:16:04.053 ] 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "subsystem": "sock", 00:16:04.053 "config": [ 00:16:04.053 { 00:16:04.053 "method": "sock_set_default_impl", 00:16:04.053 "params": { 00:16:04.053 "impl_name": "posix" 00:16:04.053 } 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "method": "sock_impl_set_options", 00:16:04.053 "params": { 00:16:04.053 "impl_name": "ssl", 00:16:04.053 "recv_buf_size": 4096, 00:16:04.053 "send_buf_size": 4096, 00:16:04.053 "enable_recv_pipe": true, 00:16:04.053 "enable_quickack": false, 00:16:04.053 "enable_placement_id": 0, 00:16:04.053 "enable_zerocopy_send_server": true, 00:16:04.053 "enable_zerocopy_send_client": false, 00:16:04.053 "zerocopy_threshold": 0, 00:16:04.053 "tls_version": 0, 00:16:04.053 "enable_ktls": false 00:16:04.053 } 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "method": "sock_impl_set_options", 00:16:04.053 "params": { 00:16:04.053 "impl_name": "posix", 00:16:04.053 "recv_buf_size": 2097152, 00:16:04.053 "send_buf_size": 2097152, 00:16:04.053 "enable_recv_pipe": true, 00:16:04.053 "enable_quickack": false, 00:16:04.053 "enable_placement_id": 0, 00:16:04.053 "enable_zerocopy_send_server": true, 00:16:04.053 "enable_zerocopy_send_client": false, 00:16:04.053 "zerocopy_threshold": 0, 00:16:04.053 "tls_version": 0, 00:16:04.053 "enable_ktls": false 00:16:04.053 } 00:16:04.053 } 00:16:04.053 ] 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "subsystem": "vmd", 00:16:04.053 "config": [] 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "subsystem": "accel", 00:16:04.053 "config": [ 00:16:04.053 { 00:16:04.053 "method": "accel_set_options", 00:16:04.053 "params": { 00:16:04.053 "small_cache_size": 128, 00:16:04.053 "large_cache_size": 16, 00:16:04.053 "task_count": 2048, 00:16:04.053 "sequence_count": 2048, 00:16:04.053 "buf_count": 2048 00:16:04.053 } 00:16:04.053 } 00:16:04.053 ] 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "subsystem": "bdev", 00:16:04.053 "config": [ 00:16:04.053 { 00:16:04.053 "method": "bdev_set_options", 00:16:04.053 "params": { 00:16:04.053 "bdev_io_pool_size": 65535, 00:16:04.053 "bdev_io_cache_size": 256, 00:16:04.053 "bdev_auto_examine": true, 00:16:04.053 "iobuf_small_cache_size": 128, 00:16:04.053 "iobuf_large_cache_size": 16 00:16:04.053 } 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "method": "bdev_raid_set_options", 00:16:04.053 "params": { 00:16:04.053 "process_window_size_kb": 1024, 00:16:04.053 "process_max_bandwidth_mb_sec": 0 00:16:04.053 } 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "method": "bdev_iscsi_set_options", 00:16:04.053 "params": { 00:16:04.053 "timeout_sec": 30 00:16:04.053 } 00:16:04.053 }, 00:16:04.053 { 00:16:04.053 "method": "bdev_nvme_set_options", 00:16:04.053 "params": { 00:16:04.053 "action_on_timeout": "none", 00:16:04.053 "timeout_us": 0, 00:16:04.053 
"timeout_admin_us": 0, 00:16:04.053 "keep_alive_timeout_ms": 10000, 00:16:04.053 "arbitration_burst": 0, 00:16:04.053 "low_priority_weight": 0, 00:16:04.053 "medium_priority_weight": 0, 00:16:04.053 "high_priority_weight": 0, 00:16:04.053 "nvme_adminq_poll_period_us": 10000, 00:16:04.053 "nvme_ioq_poll_period_us": 0, 00:16:04.053 "io_queue_requests": 0, 00:16:04.053 "delay_cmd_submit": true, 00:16:04.053 "transport_retry_count": 4, 00:16:04.053 "bdev_retry_count": 3, 00:16:04.053 "transport_ack_timeout": 0, 00:16:04.053 "ctrlr_loss_timeout_sec": 0, 00:16:04.053 "reconnect_delay_sec": 0, 00:16:04.053 "fast_io_fail_timeout_sec": 0, 00:16:04.053 "disable_auto_failback": false, 00:16:04.054 "generate_uuids": false, 00:16:04.054 "transport_tos": 0, 00:16:04.054 "nvme_error_stat": false, 00:16:04.054 "rdma_srq_size": 0, 00:16:04.054 "io_path_stat": false, 00:16:04.054 "allow_accel_sequence": false, 00:16:04.054 "rdma_max_cq_size": 0, 00:16:04.054 "rdma_cm_event_timeout_ms": 0, 00:16:04.054 "dhchap_digests": [ 00:16:04.054 "sha256", 00:16:04.054 "sha384", 00:16:04.054 "sha512" 00:16:04.054 ], 00:16:04.054 "dhchap_dhgroups": [ 00:16:04.054 "null", 00:16:04.054 "ffdhe2048", 00:16:04.054 "ffdhe3072", 00:16:04.054 "ffdhe4096", 00:16:04.054 "ffdhe6144", 00:16:04.054 "ffdhe8192" 00:16:04.054 ] 00:16:04.054 } 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "method": "bdev_nvme_set_hotplug", 00:16:04.054 "params": { 00:16:04.054 "period_us": 100000, 00:16:04.054 "enable": false 00:16:04.054 } 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "method": "bdev_malloc_create", 00:16:04.054 "params": { 00:16:04.054 "name": "malloc0", 00:16:04.054 "num_blocks": 8192, 00:16:04.054 "block_size": 4096, 00:16:04.054 "physical_block_size": 4096, 00:16:04.054 "uuid": "ceca2ac4-03aa-4617-bc25-a7f250623d79", 00:16:04.054 "optimal_io_boundary": 0, 00:16:04.054 "md_size": 0, 00:16:04.054 "dif_type": 0, 00:16:04.054 "dif_is_head_of_md": false, 00:16:04.054 "dif_pi_format": 0 00:16:04.054 } 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "method": "bdev_wait_for_examine" 00:16:04.054 } 00:16:04.054 ] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "scsi", 00:16:04.054 "config": null 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "scheduler", 00:16:04.054 "config": [ 00:16:04.054 { 00:16:04.054 "method": "framework_set_scheduler", 00:16:04.054 "params": { 00:16:04.054 "name": "static" 00:16:04.054 } 00:16:04.054 } 00:16:04.054 ] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "vhost_scsi", 00:16:04.054 "config": [] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "vhost_blk", 00:16:04.054 "config": [] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "ublk", 00:16:04.054 "config": [ 00:16:04.054 { 00:16:04.054 "method": "ublk_create_target", 00:16:04.054 "params": { 00:16:04.054 "cpumask": "1" 00:16:04.054 } 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "method": "ublk_start_disk", 00:16:04.054 "params": { 00:16:04.054 "bdev_name": "malloc0", 00:16:04.054 "ublk_id": 0, 00:16:04.054 "num_queues": 1, 00:16:04.054 "queue_depth": 128 00:16:04.054 } 00:16:04.054 } 00:16:04.054 ] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "nbd", 00:16:04.054 "config": [] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "nvmf", 00:16:04.054 "config": [ 00:16:04.054 { 00:16:04.054 "method": "nvmf_set_config", 00:16:04.054 "params": { 00:16:04.054 "discovery_filter": "match_any", 00:16:04.054 "admin_cmd_passthru": { 00:16:04.054 "identify_ctrlr": false 00:16:04.054 }, 00:16:04.054 "dhchap_digests": [ 
00:16:04.054 "sha256", 00:16:04.054 "sha384", 00:16:04.054 "sha512" 00:16:04.054 ], 00:16:04.054 "dhchap_dhgroups": [ 00:16:04.054 "null", 00:16:04.054 "ffdhe2048", 00:16:04.054 "ffdhe3072", 00:16:04.054 "ffdhe4096", 00:16:04.054 "ffdhe6144", 00:16:04.054 "ffdhe8192" 00:16:04.054 ] 00:16:04.054 } 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "method": "nvmf_set_max_subsystems", 00:16:04.054 "params": { 00:16:04.054 "max_subsystems": 1024 00:16:04.054 } 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "method": "nvmf_set_crdt", 00:16:04.054 "params": { 00:16:04.054 "crdt1": 0, 00:16:04.054 "crdt2": 0, 00:16:04.054 "crdt3": 0 00:16:04.054 } 00:16:04.054 } 00:16:04.054 ] 00:16:04.054 }, 00:16:04.054 { 00:16:04.054 "subsystem": "iscsi", 00:16:04.054 "config": [ 00:16:04.054 { 00:16:04.054 "method": "iscsi_set_options", 00:16:04.054 "params": { 00:16:04.054 "node_base": "iqn.2016-06.io.spdk", 00:16:04.054 "max_sessions": 128, 00:16:04.054 "max_connections_per_session": 2, 00:16:04.054 "max_queue_depth": 64, 00:16:04.054 "default_time2wait": 2, 00:16:04.054 "default_time2retain": 20, 00:16:04.054 "first_burst_length": 8192, 00:16:04.054 "immediate_data": true, 00:16:04.054 "allow_duplicated_isid": false, 00:16:04.054 "error_recovery_level": 0, 00:16:04.054 "nop_timeout": 60, 00:16:04.054 "nop_in_interval": 30, 00:16:04.054 "disable_chap": false, 00:16:04.054 "require_chap": false, 00:16:04.054 "mutual_chap": false, 00:16:04.054 "chap_group": 0, 00:16:04.054 "max_large_datain_per_connection": 64, 00:16:04.054 "max_r2t_per_connection": 4, 00:16:04.054 "pdu_pool_size": 36864, 00:16:04.054 "immediate_data_pool_size": 16384, 00:16:04.054 "data_out_pool_size": 2048 00:16:04.054 } 00:16:04.054 } 00:16:04.054 ] 00:16:04.054 } 00:16:04.054 ] 00:16:04.054 }' 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72726 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72726 ']' 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72726 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72726 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72726' 00:16:04.054 killing process with pid 72726 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72726 00:16:04.054 04:39:53 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72726 00:16:05.433 [2024-10-15 04:39:54.838702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:05.433 [2024-10-15 04:39:54.880927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:05.433 [2024-10-15 04:39:54.881069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:05.433 [2024-10-15 04:39:54.886863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:05.433 [2024-10-15 04:39:54.886917] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:16:05.433 [2024-10-15 04:39:54.886933] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:05.433 [2024-10-15 04:39:54.886960] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:05.433 [2024-10-15 04:39:54.887102] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72796 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72796 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72796 ']' 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:07.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.367 04:39:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:07.367 "subsystems": [ 00:16:07.367 { 00:16:07.367 "subsystem": "fsdev", 00:16:07.367 "config": [ 00:16:07.367 { 00:16:07.367 "method": "fsdev_set_opts", 00:16:07.367 "params": { 00:16:07.367 "fsdev_io_pool_size": 65535, 00:16:07.367 "fsdev_io_cache_size": 256 00:16:07.367 } 00:16:07.367 } 00:16:07.367 ] 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "subsystem": "keyring", 00:16:07.367 "config": [] 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "subsystem": "iobuf", 00:16:07.367 "config": [ 00:16:07.367 { 00:16:07.367 "method": "iobuf_set_options", 00:16:07.367 "params": { 00:16:07.367 "small_pool_count": 8192, 00:16:07.367 "large_pool_count": 1024, 00:16:07.367 "small_bufsize": 8192, 00:16:07.367 "large_bufsize": 135168 00:16:07.367 } 00:16:07.367 } 00:16:07.367 ] 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "subsystem": "sock", 00:16:07.367 "config": [ 00:16:07.367 { 00:16:07.367 "method": "sock_set_default_impl", 00:16:07.367 "params": { 00:16:07.367 "impl_name": "posix" 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "sock_impl_set_options", 00:16:07.367 "params": { 00:16:07.367 "impl_name": "ssl", 00:16:07.367 "recv_buf_size": 4096, 00:16:07.367 "send_buf_size": 4096, 00:16:07.367 "enable_recv_pipe": true, 00:16:07.367 "enable_quickack": false, 00:16:07.367 "enable_placement_id": 0, 00:16:07.367 "enable_zerocopy_send_server": true, 00:16:07.367 "enable_zerocopy_send_client": false, 00:16:07.367 "zerocopy_threshold": 0, 00:16:07.367 "tls_version": 0, 00:16:07.367 "enable_ktls": false 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "sock_impl_set_options", 00:16:07.367 "params": { 00:16:07.367 "impl_name": "posix", 00:16:07.367 "recv_buf_size": 2097152, 00:16:07.367 "send_buf_size": 2097152, 00:16:07.367 "enable_recv_pipe": true, 00:16:07.367 "enable_quickack": false, 00:16:07.367 "enable_placement_id": 0, 00:16:07.367 "enable_zerocopy_send_server": true, 00:16:07.367 "enable_zerocopy_send_client": false, 00:16:07.367 "zerocopy_threshold": 0, 00:16:07.367 "tls_version": 0, 00:16:07.367 "enable_ktls": false 00:16:07.367 } 00:16:07.367 } 00:16:07.367 ] 
00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "subsystem": "vmd", 00:16:07.367 "config": [] 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "subsystem": "accel", 00:16:07.367 "config": [ 00:16:07.367 { 00:16:07.367 "method": "accel_set_options", 00:16:07.367 "params": { 00:16:07.367 "small_cache_size": 128, 00:16:07.367 "large_cache_size": 16, 00:16:07.367 "task_count": 2048, 00:16:07.367 "sequence_count": 2048, 00:16:07.367 "buf_count": 2048 00:16:07.367 } 00:16:07.367 } 00:16:07.367 ] 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "subsystem": "bdev", 00:16:07.367 "config": [ 00:16:07.367 { 00:16:07.367 "method": "bdev_set_options", 00:16:07.367 "params": { 00:16:07.367 "bdev_io_pool_size": 65535, 00:16:07.367 "bdev_io_cache_size": 256, 00:16:07.367 "bdev_auto_examine": true, 00:16:07.367 "iobuf_small_cache_size": 128, 00:16:07.367 "iobuf_large_cache_size": 16 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "bdev_raid_set_options", 00:16:07.367 "params": { 00:16:07.367 "process_window_size_kb": 1024, 00:16:07.367 "process_max_bandwidth_mb_sec": 0 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "bdev_iscsi_set_options", 00:16:07.367 "params": { 00:16:07.367 "timeout_sec": 30 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "bdev_nvme_set_options", 00:16:07.367 "params": { 00:16:07.367 "action_on_timeout": "none", 00:16:07.367 "timeout_us": 0, 00:16:07.367 "timeout_admin_us": 0, 00:16:07.367 "keep_alive_timeout_ms": 10000, 00:16:07.367 "arbitration_burst": 0, 00:16:07.367 "low_priority_weight": 0, 00:16:07.367 "medium_priority_weight": 0, 00:16:07.367 "high_priority_weight": 0, 00:16:07.367 "nvme_adminq_poll_period_us": 10000, 00:16:07.367 "nvme_ioq_poll_period_us": 0, 00:16:07.367 "io_queue_requests": 0, 00:16:07.367 "delay_cmd_submit": true, 00:16:07.367 "transport_retry_count": 4, 00:16:07.367 "bdev_retry_count": 3, 00:16:07.367 "transport_ack_timeout": 0, 00:16:07.367 "ctrlr_loss_timeout_sec": 0, 00:16:07.367 "reconnect_delay_sec": 0, 00:16:07.367 "fast_io_fail_timeout_sec": 0, 00:16:07.367 "disable_auto_failback": false, 00:16:07.367 "generate_uuids": false, 00:16:07.367 "transport_tos": 0, 00:16:07.367 "nvme_error_stat": false, 00:16:07.367 "rdma_srq_size": 0, 00:16:07.367 "io_path_stat": false, 00:16:07.367 "allow_accel_sequence": false, 00:16:07.367 "rdma_max_cq_size": 0, 00:16:07.367 "rdma_cm_event_timeout_ms": 0, 00:16:07.367 "dhchap_digests": [ 00:16:07.367 "sha256", 00:16:07.367 "sha384", 00:16:07.367 "sha512" 00:16:07.367 ], 00:16:07.367 "dhchap_dhgroups": [ 00:16:07.367 "null", 00:16:07.367 "ffdhe2048", 00:16:07.367 "ffdhe3072", 00:16:07.367 "ffdhe4096", 00:16:07.367 "ffdhe6144", 00:16:07.367 "ffdhe8192" 00:16:07.367 ] 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "bdev_nvme_set_hotplug", 00:16:07.367 "params": { 00:16:07.367 "period_us": 100000, 00:16:07.367 "enable": false 00:16:07.367 } 00:16:07.367 }, 00:16:07.367 { 00:16:07.367 "method": "bdev_malloc_create", 00:16:07.367 "params": { 00:16:07.367 "name": "malloc0", 00:16:07.367 "num_blocks": 8192, 00:16:07.367 "block_size": 4096, 00:16:07.368 "physical_block_size": 4096, 00:16:07.368 "uuid": "ceca2ac4-03aa-4617-bc25-a7f250623d79", 00:16:07.368 "optimal_io_boundary": 0, 00:16:07.368 "md_size": 0, 00:16:07.368 "dif_type": 0, 00:16:07.368 "dif_is_head_of_md": false, 00:16:07.368 "dif_pi_format": 0 00:16:07.368 } 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "method": "bdev_wait_for_examine" 00:16:07.368 } 00:16:07.368 ] 00:16:07.368 }, 00:16:07.368 { 
00:16:07.368 "subsystem": "scsi", 00:16:07.368 "config": null 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "scheduler", 00:16:07.368 "config": [ 00:16:07.368 { 00:16:07.368 "method": "framework_set_scheduler", 00:16:07.368 "params": { 00:16:07.368 "name": "static" 00:16:07.368 } 00:16:07.368 } 00:16:07.368 ] 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "vhost_scsi", 00:16:07.368 "config": [] 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "vhost_blk", 00:16:07.368 "config": [] 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "ublk", 00:16:07.368 "config": [ 00:16:07.368 { 00:16:07.368 "method": "ublk_create_target", 00:16:07.368 "params": { 00:16:07.368 "cpumask": "1" 00:16:07.368 } 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "method": "ublk_start_disk", 00:16:07.368 "params": { 00:16:07.368 "bdev_name": "malloc0", 00:16:07.368 "ublk_id": 0, 00:16:07.368 "num_queues": 1, 00:16:07.368 "queue_depth": 128 00:16:07.368 } 00:16:07.368 } 00:16:07.368 ] 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "nbd", 00:16:07.368 "config": [] 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "nvmf", 00:16:07.368 "config": [ 00:16:07.368 { 00:16:07.368 "method": "nvmf_set_config", 00:16:07.368 "params": { 00:16:07.368 "discovery_filter": "match_any", 00:16:07.368 "admin_cmd_passthru": { 00:16:07.368 "identify_ctrlr": false 00:16:07.368 }, 00:16:07.368 "dhchap_digests": [ 00:16:07.368 "sha256", 00:16:07.368 "sha384", 00:16:07.368 "sha512" 00:16:07.368 ], 00:16:07.368 "dhchap_dhgroups": [ 00:16:07.368 "null", 00:16:07.368 "ffdhe2048", 00:16:07.368 "ffdhe3072", 00:16:07.368 "ffdhe4096", 00:16:07.368 "ffdhe6144", 00:16:07.368 "ffdhe8192" 00:16:07.368 ] 00:16:07.368 } 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "method": "nvmf_set_max_subsystems", 00:16:07.368 "params": { 00:16:07.368 "max_subsystems": 1024 00:16:07.368 } 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "method": "nvmf_set_crdt", 00:16:07.368 "params": { 00:16:07.368 "crdt1": 0, 00:16:07.368 "crdt2": 0, 00:16:07.368 "crdt3": 0 00:16:07.368 } 00:16:07.368 } 00:16:07.368 ] 00:16:07.368 }, 00:16:07.368 { 00:16:07.368 "subsystem": "iscsi", 00:16:07.368 "config": [ 00:16:07.368 { 00:16:07.368 "method": "iscsi_set_options", 00:16:07.368 "params": { 00:16:07.368 "node_base": "iqn.2016-06.io.spdk", 00:16:07.368 "max_sessions": 128, 00:16:07.368 "max_connections_per_session": 2, 00:16:07.368 "max_queue_depth": 64, 00:16:07.368 "default_time2wait": 2, 00:16:07.368 "default_time2retain": 20, 00:16:07.368 "first_burst_length": 8192, 00:16:07.368 "immediate_data": true, 00:16:07.368 "allow_duplicated_isid": false, 00:16:07.368 "error_recovery_level": 0, 00:16:07.368 "nop_timeout": 60, 00:16:07.368 "nop_in_interval": 30, 00:16:07.368 "disable_chap": false, 00:16:07.368 "require_chap": false, 00:16:07.368 "mutual_chap": false, 00:16:07.368 "chap_group": 0, 00:16:07.368 "max_large_datain_per_connection": 64, 00:16:07.368 "max_r2t_per_connection": 4, 00:16:07.368 "pdu_pool_size": 36864, 00:16:07.368 "immediate_data_pool_size": 16384, 00:16:07.368 "data_out_pool_size": 2048 00:16:07.368 } 00:16:07.368 } 00:16:07.368 ] 00:16:07.368 } 00:16:07.368 ] 00:16:07.368 }' 00:16:07.368 04:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:07.628 [2024-10-15 04:39:56.877311] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:16:07.628 [2024-10-15 04:39:56.877478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72796 ] 00:16:07.628 [2024-10-15 04:39:57.051027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.887 [2024-10-15 04:39:57.171785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.824 [2024-10-15 04:39:58.241833] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:08.824 [2024-10-15 04:39:58.243077] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:08.824 [2024-10-15 04:39:58.249981] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:08.824 [2024-10-15 04:39:58.250066] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:08.824 [2024-10-15 04:39:58.250077] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:08.824 [2024-10-15 04:39:58.250088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:08.824 [2024-10-15 04:39:58.258929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:08.824 [2024-10-15 04:39:58.258957] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:08.824 [2024-10-15 04:39:58.265846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:08.824 [2024-10-15 04:39:58.265943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:08.824 [2024-10-15 04:39:58.282856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:08.824 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.824 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72796 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72796 ']' 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72796 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72796 00:16:09.083 killing process with pid 72796 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:09.083 
04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72796' 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72796 00:16:09.083 04:39:58 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72796 00:16:10.494 [2024-10-15 04:39:59.978364] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.753 [2024-10-15 04:40:00.012937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.753 [2024-10-15 04:40:00.013099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.753 [2024-10-15 04:40:00.024869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.753 [2024-10-15 04:40:00.028880] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:10.753 [2024-10-15 04:40:00.028897] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:10.753 [2024-10-15 04:40:00.028949] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:10.753 [2024-10-15 04:40:00.029136] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:12.658 04:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:12.658 00:16:12.658 real 0m10.318s 00:16:12.658 user 0m8.075s 00:16:12.658 sys 0m3.092s 00:16:12.658 04:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.658 04:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:12.658 ************************************ 00:16:12.658 END TEST test_save_ublk_config 00:16:12.658 ************************************ 00:16:12.658 04:40:01 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72885 00:16:12.658 04:40:01 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:12.658 04:40:01 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.658 04:40:01 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72885 00:16:12.658 04:40:01 ublk -- common/autotest_common.sh@831 -- # '[' -z 72885 ']' 00:16:12.658 04:40:01 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.658 04:40:01 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.658 04:40:01 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.658 04:40:01 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.658 04:40:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.658 [2024-10-15 04:40:02.090551] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:16:12.658 [2024-10-15 04:40:02.090696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72885 ] 00:16:12.917 [2024-10-15 04:40:02.265766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:12.917 [2024-10-15 04:40:02.385911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.917 [2024-10-15 04:40:02.385948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.853 04:40:03 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.853 04:40:03 ublk -- common/autotest_common.sh@864 -- # return 0 00:16:13.853 04:40:03 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:13.853 04:40:03 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:13.853 04:40:03 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:13.853 04:40:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:13.853 ************************************ 00:16:13.853 START TEST test_create_ublk 00:16:13.853 ************************************ 00:16:13.853 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:16:13.853 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:13.853 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.853 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:13.853 [2024-10-15 04:40:03.330852] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:13.853 [2024-10-15 04:40:03.333596] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:13.853 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.854 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:13.854 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:13.854 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.854 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:14.421 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:14.422 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.422 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:14.422 [2024-10-15 04:40:03.636028] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:14.422 [2024-10-15 04:40:03.636472] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:14.422 [2024-10-15 04:40:03.636492] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:14.422 [2024-10-15 04:40:03.636501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:14.422 [2024-10-15 04:40:03.646884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:14.422 [2024-10-15 04:40:03.646913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:14.422 
[2024-10-15 04:40:03.654860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:14.422 [2024-10-15 04:40:03.655467] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:14.422 [2024-10-15 04:40:03.666953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:14.422 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:14.422 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.422 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:14.422 04:40:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:14.422 { 00:16:14.422 "ublk_device": "/dev/ublkb0", 00:16:14.422 "id": 0, 00:16:14.422 "queue_depth": 512, 00:16:14.422 "num_queues": 4, 00:16:14.422 "bdev_name": "Malloc0" 00:16:14.422 } 00:16:14.422 ]' 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
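Before the fio job assembled above is invoked immediately below, the trace shows the device under test being built with a handful of RPCs. Reconstructed as direct rpc.py calls (a sketch assuming rpc_cmd forwards its arguments to scripts/rpc.py, which the flags visible in the trace suggest):

    # create the ublk target, then a 128 MiB malloc bdev with 4096-byte
    # blocks (auto-named Malloc0)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 4096
    # expose Malloc0 as /dev/ublkb0 with 4 queues of depth 512
    # (NUM_QUEUE/QUEUE_DEPTH from ublk.sh)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    # confirm the device exists before pointing fio at it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks -n 0

The fio job then time-writes the full 128 MiB device with pattern 0xcc for 10 seconds; the verification read phase never runs because the time-based write phase consumes the whole runtime, as fio itself reports just below.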
00:16:14.422 04:40:03 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:14.681 fio: verification read phase will never start because write phase uses all of runtime 00:16:14.681 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:14.681 fio-3.35 00:16:14.681 Starting 1 process 00:16:24.676 00:16:24.676 fio_test: (groupid=0, jobs=1): err= 0: pid=72937: Tue Oct 15 04:40:14 2024 00:16:24.676 write: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(636MiB/10001msec); 0 zone resets 00:16:24.676 clat (usec): min=38, max=4029, avg=60.52, stdev=98.03 00:16:24.676 lat (usec): min=38, max=4030, avg=60.99, stdev=98.04 00:16:24.676 clat percentiles (usec): 00:16:24.676 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 51], 20.00th=[ 54], 00:16:24.676 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:16:24.676 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 70], 00:16:24.676 | 99.00th=[ 82], 99.50th=[ 87], 99.90th=[ 2057], 99.95th=[ 2769], 00:16:24.676 | 99.99th=[ 3556] 00:16:24.676 bw ( KiB/s): min=62384, max=74896, per=100.00%, avg=65346.11, stdev=3827.95, samples=19 00:16:24.676 iops : min=15596, max=18724, avg=16336.53, stdev=956.99, samples=19 00:16:24.676 lat (usec) : 50=9.28%, 100=90.47%, 250=0.04%, 500=0.01%, 750=0.01% 00:16:24.676 lat (usec) : 1000=0.01% 00:16:24.676 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01% 00:16:24.676 cpu : usr=3.22%, sys=11.10%, ctx=162835, majf=0, minf=795 00:16:24.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.676 issued rwts: total=0,162835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.676 00:16:24.676 Run status group 0 (all jobs): 00:16:24.676 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=636MiB (667MB), run=10001-10001msec 00:16:24.676 00:16:24.676 Disk stats (read/write): 00:16:24.676 ublkb0: ios=0/161196, merge=0/0, ticks=0/8546, in_queue=8547, util=99.12% 00:16:24.676 04:40:14 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:24.676 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.676 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.676 [2024-10-15 04:40:14.161251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:24.936 [2024-10-15 04:40:14.197282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:24.936 [2024-10-15 04:40:14.198177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:24.936 [2024-10-15 04:40:14.202933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:24.936 [2024-10-15 04:40:14.203245] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:24.936 [2024-10-15 04:40:14.203258] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.936 04:40:14 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.936 [2024-10-15 04:40:14.225929] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:24.936 request: 00:16:24.936 { 00:16:24.936 "ublk_id": 0, 00:16:24.936 "method": "ublk_stop_disk", 00:16:24.936 "req_id": 1 00:16:24.936 } 00:16:24.936 Got JSON-RPC error response 00:16:24.936 response: 00:16:24.936 { 00:16:24.936 "code": -19, 00:16:24.936 "message": "No such device" 00:16:24.936 } 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.936 04:40:14 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.936 [2024-10-15 04:40:14.238012] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.936 [2024-10-15 04:40:14.246857] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:24.936 [2024-10-15 04:40:14.246906] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.936 04:40:14 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.936 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.504 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.504 04:40:14 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:25.504 04:40:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:25.504 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.504 04:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.504 04:40:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.504 04:40:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:25.504 04:40:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:25.763 04:40:15 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:25.763 04:40:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:25.763 04:40:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.763 04:40:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.763 04:40:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.763 04:40:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:25.763 04:40:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:25.763 ************************************ 00:16:25.763 END TEST test_create_ublk 00:16:25.763 ************************************ 00:16:25.763 04:40:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:25.763 00:16:25.763 real 0m11.798s 00:16:25.763 user 0m0.685s 00:16:25.763 sys 0m1.257s 00:16:25.763 04:40:15 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.763 04:40:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.763 04:40:15 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:25.763 04:40:15 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:25.763 04:40:15 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.763 04:40:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.763 ************************************ 00:16:25.763 START TEST test_create_multi_ublk 00:16:25.763 ************************************ 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.763 [2024-10-15 04:40:15.207834] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.763 [2024-10-15 04:40:15.210560] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.763 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.084 [2024-10-15 04:40:15.499031] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:16:26.084 [2024-10-15 04:40:15.499557] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:26.084 [2024-10-15 04:40:15.499574] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:26.084 [2024-10-15 04:40:15.499588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:26.084 [2024-10-15 04:40:15.506893] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:26.084 [2024-10-15 04:40:15.506925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:26.084 [2024-10-15 04:40:15.514883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:26.084 [2024-10-15 04:40:15.515492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:26.084 [2024-10-15 04:40:15.525205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.084 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.344 [2024-10-15 04:40:15.813017] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:26.344 [2024-10-15 04:40:15.813496] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:26.344 [2024-10-15 04:40:15.813518] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:26.344 [2024-10-15 04:40:15.813527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:26.344 [2024-10-15 04:40:15.821237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:26.344 [2024-10-15 04:40:15.821263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:26.344 [2024-10-15 04:40:15.827918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:26.344 [2024-10-15 04:40:15.828483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:26.344 [2024-10-15 04:40:15.843864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:26.344 04:40:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:26.602 04:40:15 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:26.602 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.602 04:40:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.861 [2024-10-15 04:40:16.145990] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:26.861 [2024-10-15 04:40:16.146447] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:26.861 [2024-10-15 04:40:16.146465] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:26.861 [2024-10-15 04:40:16.146476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:26.861 [2024-10-15 04:40:16.153883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:26.861 [2024-10-15 04:40:16.153914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:26.861 [2024-10-15 04:40:16.161895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:26.861 [2024-10-15 04:40:16.162510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:26.861 [2024-10-15 04:40:16.170917] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.861 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.120 [2024-10-15 04:40:16.481992] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:27.120 [2024-10-15 04:40:16.482429] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:27.120 [2024-10-15 04:40:16.482449] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:27.120 [2024-10-15 04:40:16.482457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.120 [2024-10-15 
04:40:16.491111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.120 [2024-10-15 04:40:16.491135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.120 [2024-10-15 04:40:16.497864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.120 [2024-10-15 04:40:16.498520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:27.120 [2024-10-15 04:40:16.510874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:27.120 { 00:16:27.120 "ublk_device": "/dev/ublkb0", 00:16:27.120 "id": 0, 00:16:27.120 "queue_depth": 512, 00:16:27.120 "num_queues": 4, 00:16:27.120 "bdev_name": "Malloc0" 00:16:27.120 }, 00:16:27.120 { 00:16:27.120 "ublk_device": "/dev/ublkb1", 00:16:27.120 "id": 1, 00:16:27.120 "queue_depth": 512, 00:16:27.120 "num_queues": 4, 00:16:27.120 "bdev_name": "Malloc1" 00:16:27.120 }, 00:16:27.120 { 00:16:27.120 "ublk_device": "/dev/ublkb2", 00:16:27.120 "id": 2, 00:16:27.120 "queue_depth": 512, 00:16:27.120 "num_queues": 4, 00:16:27.120 "bdev_name": "Malloc2" 00:16:27.120 }, 00:16:27.120 { 00:16:27.120 "ublk_device": "/dev/ublkb3", 00:16:27.120 "id": 3, 00:16:27.120 "queue_depth": 512, 00:16:27.120 "num_queues": 4, 00:16:27.120 "bdev_name": "Malloc3" 00:16:27.120 } 00:16:27.120 ]' 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:27.120 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
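[Annotation: The JSON above is ublk_get_disks output after the four create iterations; the jq checks that follow verify each field per device. A minimal sketch of the create-and-verify loop this test drives, assuming rpc.py is invoked directly (the path matches the one used for ublk_destroy_target later in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do
        "$RPC" bdev_malloc_create -b "Malloc$i" 128 4096     # 128 MiB malloc bdev, 4096-byte blocks
        "$RPC" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exposes /dev/ublkb$i with 4 queues, queue depth 512
    done
    "$RPC" ublk_get_disks | jq -r '.[0].ublk_device'         # expected: /dev/ublkb0

Each ublk_start_disk call shows up in the driver log as the UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS, UBLK_CMD_START_DEV control-command triple seen above.]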
00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:27.380 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:27.639 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:27.639 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:27.639 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:27.639 04:40:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:27.639 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:27.898 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.157 [2024-10-15 04:40:17.409983] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.157 [2024-10-15 04:40:17.458303] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.157 [2024-10-15 04:40:17.459398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.157 [2024-10-15 04:40:17.465865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.157 [2024-10-15 04:40:17.466190] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:28.157 [2024-10-15 04:40:17.466207] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.157 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.157 [2024-10-15 04:40:17.481977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.158 [2024-10-15 04:40:17.529231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.158 [2024-10-15 04:40:17.530369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.158 [2024-10-15 04:40:17.537876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.158 [2024-10-15 04:40:17.538187] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:28.158 [2024-10-15 04:40:17.538203] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.158 [2024-10-15 04:40:17.552007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.158 [2024-10-15 04:40:17.589262] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.158 [2024-10-15 04:40:17.590371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.158 [2024-10-15 04:40:17.597880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.158 [2024-10-15 04:40:17.598180] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:28.158 [2024-10-15 04:40:17.598197] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.158 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
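[Annotation: ublk 3 below is stopped the same way, after which the target itself is torn down. A condensed sketch of the teardown sequence, assuming RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py as in the earlier sketch (the -t 120 destroy call appears verbatim just below):

    for i in 0 1 2 3; do
        "$RPC" ublk_stop_disk "$i"        # driver log: UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    done
    "$RPC" -t 120 ublk_destroy_target     # -t 120 widens the RPC client timeout for the slower teardown path]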
00:16:28.158 [2024-10-15 04:40:17.611949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.158 [2024-10-15 04:40:17.658903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.158 [2024-10-15 04:40:17.659609] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.417 [2024-10-15 04:40:17.666958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.417 [2024-10-15 04:40:17.667247] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:28.417 [2024-10-15 04:40:17.667263] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:28.417 [2024-10-15 04:40:17.885950] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:28.417 [2024-10-15 04:40:17.893856] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:28.417 [2024-10-15 04:40:17.893911] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.417 04:40:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.356 04:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.356 04:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.356 04:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:29.356 04:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.356 04:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.642 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.642 04:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.642 04:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:29.642 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.642 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.901 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.901 04:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.901 04:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:29.901 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.901 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.470 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.471 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:30.471 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:30.471 ************************************ 00:16:30.471 END TEST test_create_multi_ublk 00:16:30.471 ************************************ 00:16:30.471 04:40:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:30.471 00:16:30.471 real 0m4.686s 00:16:30.471 user 0m1.033s 00:16:30.471 sys 0m0.241s 00:16:30.471 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.471 04:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.471 04:40:19 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:30.471 04:40:19 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:30.471 04:40:19 ublk -- ublk/ublk.sh@130 -- # killprocess 72885 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@950 -- # '[' -z 72885 ']' 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@954 -- # kill -0 72885 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@955 -- # uname 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72885 00:16:30.471 killing process with pid 72885 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72885' 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@969 -- # kill 72885 00:16:30.471 04:40:19 ublk -- common/autotest_common.sh@974 -- # wait 72885 00:16:31.850 [2024-10-15 04:40:21.130822] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.850 [2024-10-15 04:40:21.130876] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.229 00:16:33.229 real 0m31.057s 00:16:33.229 user 0m44.148s 00:16:33.229 sys 0m11.006s 00:16:33.229 04:40:22 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.229 04:40:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.229 ************************************ 00:16:33.229 END TEST ublk 00:16:33.229 ************************************ 00:16:33.229 04:40:22 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:33.229 04:40:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:16:33.229 04:40:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.229 04:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:33.229 ************************************ 00:16:33.229 START TEST ublk_recovery 00:16:33.229 ************************************ 00:16:33.229 04:40:22 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:33.229 * Looking for test storage... 00:16:33.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:33.229 04:40:22 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:33.229 04:40:22 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:33.229 04:40:22 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:33.229 04:40:22 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.229 04:40:22 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.230 04:40:22 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.230 --rc genhtml_branch_coverage=1 00:16:33.230 --rc genhtml_function_coverage=1 00:16:33.230 --rc genhtml_legend=1 00:16:33.230 --rc geninfo_all_blocks=1 00:16:33.230 --rc geninfo_unexecuted_blocks=1 00:16:33.230 00:16:33.230 ' 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.230 --rc genhtml_branch_coverage=1 00:16:33.230 --rc genhtml_function_coverage=1 00:16:33.230 --rc genhtml_legend=1 00:16:33.230 --rc geninfo_all_blocks=1 00:16:33.230 --rc geninfo_unexecuted_blocks=1 00:16:33.230 00:16:33.230 ' 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.230 --rc genhtml_branch_coverage=1 00:16:33.230 --rc genhtml_function_coverage=1 00:16:33.230 --rc genhtml_legend=1 00:16:33.230 --rc geninfo_all_blocks=1 00:16:33.230 --rc geninfo_unexecuted_blocks=1 00:16:33.230 00:16:33.230 ' 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:33.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.230 --rc genhtml_branch_coverage=1 00:16:33.230 --rc genhtml_function_coverage=1 00:16:33.230 --rc genhtml_legend=1 00:16:33.230 --rc geninfo_all_blocks=1 00:16:33.230 --rc geninfo_unexecuted_blocks=1 00:16:33.230 00:16:33.230 ' 00:16:33.230 04:40:22 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:33.230 04:40:22 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:33.230 04:40:22 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:33.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.230 04:40:22 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73316 00:16:33.230 04:40:22 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.230 04:40:22 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73316 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73316 ']' 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.230 04:40:22 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.230 04:40:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:33.490 [2024-10-15 04:40:22.822657] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:16:33.490 [2024-10-15 04:40:22.822783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73316 ] 00:16:33.749 [2024-10-15 04:40:22.996382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:33.749 [2024-10-15 04:40:23.112887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.749 [2024-10-15 04:40:23.112925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.687 04:40:24 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:16:34.688 04:40:24 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 [2024-10-15 04:40:24.032838] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:34.688 [2024-10-15 04:40:24.035521] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.688 04:40:24 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 malloc0 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.688 04:40:24 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.688 04:40:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 [2024-10-15 04:40:24.185014] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:34.688 [2024-10-15 04:40:24.185158] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:34.688 [2024-10-15 04:40:24.185174] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:34.688 [2024-10-15 04:40:24.185186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:34.947 [2024-10-15 04:40:24.193050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:34.947 [2024-10-15 04:40:24.193109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:34.947 [2024-10-15 04:40:24.200861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:34.947 [2024-10-15 04:40:24.201034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:34.947 [2024-10-15 04:40:24.217879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.947 1 00:16:34.947 04:40:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.947 04:40:24 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:35.885 04:40:25 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73362 00:16:35.885 04:40:25 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:35.885 04:40:25 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:35.885 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.885 fio-3.35 00:16:35.885 Starting 1 process 00:16:41.152 04:40:30 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73316 00:16:41.152 04:40:30 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:46.418 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73316 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:46.418 04:40:35 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73463 00:16:46.418 04:40:35 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:46.418 04:40:35 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.418 04:40:35 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73463 00:16:46.418 04:40:35 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73463 ']' 00:16:46.418 04:40:35 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.418 04:40:35 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.418 04:40:35 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.418 04:40:35 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.418 04:40:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.418 [2024-10-15 04:40:35.355462] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
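[Annotation: At this point the recovery scenario is mid-flight: malloc0 was exposed as /dev/ublkb1, fio was started against it, the first target (pid 73316) was killed with SIGKILL while I/O was running, and a second target is now booting so the device can be re-adopted. A condensed sketch of the sequence, with commands and fio flags as they appear in this log:

    "$RPC" ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1: 2 queues, queue depth 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                               # simulate a target crash mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &         # fresh target (pid 73463 here)
    # after recreating the ublk target and the malloc0 bdev, the disk is re-adopted in place:
    "$RPC" ublk_recover_disk malloc0 1

The recover path visible below differs from a normal start: the target issues UBLK_CMD_GET_DEV_INFO and UBLK_CMD_START_USER_RECOVERY/UBLK_CMD_END_USER_RECOVERY instead of ADD_DEV/START_DEV, so the existing /dev/ublkb1 keeps servicing the still-running fio job, which completes its full 60 s run with util at 99.94%.]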
00:16:46.418 [2024-10-15 04:40:35.355811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73463 ] 00:16:46.418 [2024-10-15 04:40:35.531502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.418 [2024-10-15 04:40:35.655823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.418 [2024-10-15 04:40:35.655882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:16:47.353 04:40:36 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.353 [2024-10-15 04:40:36.571878] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.353 [2024-10-15 04:40:36.574622] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.353 04:40:36 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.353 malloc0 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.353 04:40:36 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.353 [2024-10-15 04:40:36.730104] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:47.353 [2024-10-15 04:40:36.730155] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:47.353 [2024-10-15 04:40:36.730167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:47.353 [2024-10-15 04:40:36.737870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:47.353 [2024-10-15 04:40:36.737898] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:47.353 [2024-10-15 04:40:36.737908] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:47.353 1 00:16:47.353 [2024-10-15 04:40:36.738004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:47.353 04:40:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.353 04:40:36 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73362 00:16:47.353 [2024-10-15 04:40:36.745872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:47.353 [2024-10-15 04:40:36.752303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:47.353 [2024-10-15 04:40:36.759047] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:47.353 [2024-10-15 
04:40:36.759077] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:43.624 00:17:43.624 fio_test: (groupid=0, jobs=1): err= 0: pid=73365: Tue Oct 15 04:41:25 2024 00:17:43.624 read: IOPS=22.1k, BW=86.2MiB/s (90.4MB/s)(5173MiB/60001msec) 00:17:43.625 slat (nsec): min=1916, max=1370.5k, avg=7276.27, stdev=2823.73 00:17:43.625 clat (usec): min=942, max=6531.6k, avg=2872.55, stdev=46774.13 00:17:43.625 lat (usec): min=946, max=6531.6k, avg=2879.83, stdev=46774.14 00:17:43.625 clat percentiles (usec): 00:17:43.625 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:17:43.625 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:17:43.625 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2835], 95.00th=[ 3687], 00:17:43.625 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 6587], 99.95th=[ 7177], 00:17:43.625 | 99.99th=[12780] 00:17:43.625 bw ( KiB/s): min=21720, max=103360, per=100.00%, avg=98196.79, stdev=10131.01, samples=107 00:17:43.625 iops : min= 5430, max=25840, avg=24549.18, stdev=2532.75, samples=107 00:17:43.625 write: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(5167MiB/60001msec); 0 zone resets 00:17:43.625 slat (nsec): min=1932, max=885621, avg=7313.14, stdev=2636.03 00:17:43.625 clat (usec): min=831, max=6531.9k, avg=2914.02, stdev=43964.99 00:17:43.625 lat (usec): min=836, max=6531.9k, avg=2921.33, stdev=43964.99 00:17:43.625 clat percentiles (usec): 00:17:43.625 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2409], 00:17:43.625 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:17:43.625 | 70.00th=[ 2573], 80.00th=[ 2638], 90.00th=[ 2868], 95.00th=[ 3687], 00:17:43.625 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 6652], 99.95th=[ 7308], 00:17:43.625 | 99.99th=[12911] 00:17:43.625 bw ( KiB/s): min=22096, max=103120, per=100.00%, avg=98081.43, stdev=9953.26, samples=107 00:17:43.625 iops : min= 5524, max=25780, avg=24520.34, stdev=2488.31, samples=107 00:17:43.625 lat (usec) : 1000=0.01% 00:17:43.625 lat (msec) : 2=1.25%, 4=95.07%, 10=3.67%, 20=0.01%, >=2000=0.01% 00:17:43.625 cpu : usr=12.18%, sys=32.15%, ctx=114177, majf=0, minf=14 00:17:43.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:43.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.625 issued rwts: total=1324352,1322801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.625 00:17:43.625 Run status group 0 (all jobs): 00:17:43.625 READ: bw=86.2MiB/s (90.4MB/s), 86.2MiB/s-86.2MiB/s (90.4MB/s-90.4MB/s), io=5173MiB (5425MB), run=60001-60001msec 00:17:43.625 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=5167MiB (5418MB), run=60001-60001msec 00:17:43.625 00:17:43.625 Disk stats (read/write): 00:17:43.625 ublkb1: ios=1321536/1320000, merge=0/0, ticks=3683641/3601548, in_queue=7285190, util=99.94% 00:17:43.625 04:41:25 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 [2024-10-15 04:41:25.508444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:43.625 [2024-10-15 04:41:25.548970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:17:43.625 [2024-10-15 04:41:25.569961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:43.625 [2024-10-15 04:41:25.577861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:43.625 [2024-10-15 04:41:25.577983] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:43.625 [2024-10-15 04:41:25.577997] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.625 04:41:25 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 [2024-10-15 04:41:25.591935] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:43.625 [2024-10-15 04:41:25.599846] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:43.625 [2024-10-15 04:41:25.599890] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.625 04:41:25 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:43.625 04:41:25 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:43.625 04:41:25 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73463 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73463 ']' 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73463 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73463 00:17:43.625 killing process with pid 73463 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73463' 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73463 00:17:43.625 04:41:25 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73463 00:17:43.625 [2024-10-15 04:41:27.284640] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:43.625 [2024-10-15 04:41:27.284704] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:43.625 ************************************ 00:17:43.625 END TEST ublk_recovery 00:17:43.625 ************************************ 00:17:43.625 00:17:43.625 real 1m6.265s 00:17:43.625 user 1m49.702s 00:17:43.625 sys 0m38.462s 00:17:43.625 04:41:28 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:43.625 04:41:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 04:41:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:43.625 04:41:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.625 04:41:28 -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 04:41:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:17:43.625 04:41:28 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:43.625 04:41:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:43.625 04:41:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:43.625 04:41:28 -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 ************************************ 00:17:43.625 START TEST ftl 00:17:43.625 ************************************ 00:17:43.625 04:41:28 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:43.625 * Looking for test storage... 00:17:43.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.625 04:41:29 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:43.625 04:41:29 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:17:43.625 04:41:29 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:43.625 04:41:29 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:43.625 04:41:29 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.625 04:41:29 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.625 04:41:29 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.625 04:41:29 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.625 04:41:29 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.625 04:41:29 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.625 04:41:29 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.625 04:41:29 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.625 04:41:29 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.625 04:41:29 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.625 04:41:29 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.625 04:41:29 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:43.625 04:41:29 ftl -- scripts/common.sh@345 -- # : 1 00:17:43.625 04:41:29 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.625 04:41:29 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.625 04:41:29 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:43.625 04:41:29 ftl -- scripts/common.sh@353 -- # local d=1 00:17:43.626 04:41:29 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.626 04:41:29 ftl -- scripts/common.sh@355 -- # echo 1 00:17:43.626 04:41:29 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.626 04:41:29 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:43.626 04:41:29 ftl -- scripts/common.sh@353 -- # local d=2 00:17:43.626 04:41:29 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.626 04:41:29 ftl -- scripts/common.sh@355 -- # echo 2 00:17:43.626 04:41:29 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.626 04:41:29 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.626 04:41:29 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.626 04:41:29 ftl -- scripts/common.sh@368 -- # return 0 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.626 --rc genhtml_branch_coverage=1 00:17:43.626 --rc genhtml_function_coverage=1 00:17:43.626 --rc genhtml_legend=1 00:17:43.626 --rc geninfo_all_blocks=1 00:17:43.626 --rc geninfo_unexecuted_blocks=1 00:17:43.626 00:17:43.626 ' 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.626 --rc genhtml_branch_coverage=1 00:17:43.626 --rc genhtml_function_coverage=1 00:17:43.626 --rc genhtml_legend=1 00:17:43.626 --rc geninfo_all_blocks=1 00:17:43.626 --rc geninfo_unexecuted_blocks=1 00:17:43.626 00:17:43.626 ' 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.626 --rc genhtml_branch_coverage=1 00:17:43.626 --rc genhtml_function_coverage=1 00:17:43.626 --rc genhtml_legend=1 00:17:43.626 --rc geninfo_all_blocks=1 00:17:43.626 --rc geninfo_unexecuted_blocks=1 00:17:43.626 00:17:43.626 ' 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:43.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.626 --rc genhtml_branch_coverage=1 00:17:43.626 --rc genhtml_function_coverage=1 00:17:43.626 --rc genhtml_legend=1 00:17:43.626 --rc geninfo_all_blocks=1 00:17:43.626 --rc geninfo_unexecuted_blocks=1 00:17:43.626 00:17:43.626 ' 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:43.626 04:41:29 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:43.626 04:41:29 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.626 04:41:29 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:43.626 04:41:29 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:43.626 04:41:29 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:43.626 04:41:29 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.626 04:41:29 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:43.626 04:41:29 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:43.626 04:41:29 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.626 04:41:29 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.626 04:41:29 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:43.626 04:41:29 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:43.626 04:41:29 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:43.626 04:41:29 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:43.626 04:41:29 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:43.626 04:41:29 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:43.626 04:41:29 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.626 04:41:29 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:43.626 04:41:29 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:43.626 04:41:29 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:43.626 04:41:29 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:43.626 04:41:29 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:43.626 04:41:29 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:43.626 04:41:29 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:43.626 04:41:29 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:43.626 04:41:29 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:43.626 04:41:29 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:43.626 04:41:29 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:43.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:43.626 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.626 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.626 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.626 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74282 00:17:43.626 04:41:29 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74282 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@831 -- # '[' -z 74282 ']' 00:17:43.626 04:41:29 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.626 04:41:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:43.626 [2024-10-15 04:41:30.084796] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:17:43.626 [2024-10-15 04:41:30.084942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74282 ] 00:17:43.626 [2024-10-15 04:41:30.257924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.626 [2024-10-15 04:41:30.376840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.626 04:41:30 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.626 04:41:30 ftl -- common/autotest_common.sh@864 -- # return 0 00:17:43.626 04:41:30 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:43.626 04:41:31 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:43.626 04:41:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:43.626 04:41:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:43.626 04:41:32 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@50 -- # break 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:43.627 04:41:32 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:43.627 04:41:33 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:43.627 04:41:33 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:43.627 04:41:33 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:43.627 04:41:33 ftl -- ftl/ftl.sh@63 -- # break 00:17:43.627 04:41:33 ftl -- ftl/ftl.sh@66 -- # killprocess 74282 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 74282 ']' 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@954 -- # kill -0 74282 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@955 -- # uname 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.627 04:41:33 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74282 00:17:43.627 killing process with pid 74282 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74282' 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@969 -- # kill 74282 00:17:43.627 04:41:33 ftl -- common/autotest_common.sh@974 -- # wait 74282 00:17:46.181 04:41:35 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:46.181 04:41:35 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:46.181 04:41:35 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:46.181 04:41:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.181 04:41:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:46.181 ************************************ 00:17:46.181 START TEST ftl_fio_basic 00:17:46.181 ************************************ 00:17:46.181 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:46.441 * Looking for test storage... 00:17:46.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.441 --rc genhtml_branch_coverage=1 00:17:46.441 --rc genhtml_function_coverage=1 00:17:46.441 --rc genhtml_legend=1 00:17:46.441 --rc geninfo_all_blocks=1 00:17:46.441 --rc geninfo_unexecuted_blocks=1 00:17:46.441 00:17:46.441 ' 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.441 --rc genhtml_branch_coverage=1 00:17:46.441 --rc genhtml_function_coverage=1 00:17:46.441 --rc genhtml_legend=1 00:17:46.441 --rc geninfo_all_blocks=1 00:17:46.441 --rc geninfo_unexecuted_blocks=1 00:17:46.441 00:17:46.441 ' 00:17:46.441 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.441 --rc genhtml_branch_coverage=1 00:17:46.441 --rc genhtml_function_coverage=1 00:17:46.441 --rc genhtml_legend=1 00:17:46.442 --rc geninfo_all_blocks=1 00:17:46.442 --rc geninfo_unexecuted_blocks=1 00:17:46.442 00:17:46.442 ' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:46.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.442 --rc genhtml_branch_coverage=1 00:17:46.442 --rc genhtml_function_coverage=1 00:17:46.442 --rc genhtml_legend=1 00:17:46.442 --rc geninfo_all_blocks=1 00:17:46.442 --rc geninfo_unexecuted_blocks=1 00:17:46.442 00:17:46.442 ' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74432 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74432 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74432 ']' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:46.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.442 04:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:46.442 [2024-10-15 04:41:35.943738] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
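fio.sh is now bringing up the SPDK target it will test against: spdk_tgt is launched with -m 7 (cpumask 0b111, hence the three reactor threads on cores 0-2 a few entries below) and the script blocks until the RPC socket answers. Roughly, and assuming the default /var/tmp/spdk.sock socket, the waitforlisten step amounts to:

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt_bin" -m 7 &      # reactors on cores 0, 1 and 2
    svcpid=$!
    # Poll the UNIX-domain RPC socket until the target is up; the real
    # helper also re-checks that $svcpid is still alive on each pass.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done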
00:17:46.442 [2024-10-15 04:41:35.943882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74432 ] 00:17:46.702 [2024-10-15 04:41:36.115768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:46.960 [2024-10-15 04:41:36.239034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.960 [2024-10-15 04:41:36.239184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.960 [2024-10-15 04:41:36.239215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:47.898 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:48.157 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:48.416 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:48.416 { 00:17:48.416 "name": "nvme0n1", 00:17:48.416 "aliases": [ 00:17:48.416 "ae1ecac6-8e6a-4a16-bfb5-f11815e4df6e" 00:17:48.416 ], 00:17:48.416 "product_name": "NVMe disk", 00:17:48.416 "block_size": 4096, 00:17:48.416 "num_blocks": 1310720, 00:17:48.416 "uuid": "ae1ecac6-8e6a-4a16-bfb5-f11815e4df6e", 00:17:48.416 "numa_id": -1, 00:17:48.416 "assigned_rate_limits": { 00:17:48.416 "rw_ios_per_sec": 0, 00:17:48.416 "rw_mbytes_per_sec": 0, 00:17:48.416 "r_mbytes_per_sec": 0, 00:17:48.416 "w_mbytes_per_sec": 0 00:17:48.416 }, 00:17:48.416 "claimed": false, 00:17:48.416 "zoned": false, 00:17:48.416 "supported_io_types": { 00:17:48.416 "read": true, 00:17:48.416 "write": true, 00:17:48.416 "unmap": true, 00:17:48.416 "flush": true, 00:17:48.416 "reset": true, 00:17:48.416 "nvme_admin": true, 00:17:48.416 "nvme_io": true, 00:17:48.416 "nvme_io_md": false, 00:17:48.416 "write_zeroes": true, 00:17:48.416 "zcopy": false, 00:17:48.416 "get_zone_info": false, 00:17:48.416 "zone_management": false, 00:17:48.416 "zone_append": false, 00:17:48.416 "compare": true, 00:17:48.416 "compare_and_write": false, 00:17:48.416 "abort": true, 00:17:48.416 
"seek_hole": false, 00:17:48.416 "seek_data": false, 00:17:48.416 "copy": true, 00:17:48.416 "nvme_iov_md": false 00:17:48.416 }, 00:17:48.416 "driver_specific": { 00:17:48.416 "nvme": [ 00:17:48.416 { 00:17:48.416 "pci_address": "0000:00:11.0", 00:17:48.416 "trid": { 00:17:48.416 "trtype": "PCIe", 00:17:48.416 "traddr": "0000:00:11.0" 00:17:48.417 }, 00:17:48.417 "ctrlr_data": { 00:17:48.417 "cntlid": 0, 00:17:48.417 "vendor_id": "0x1b36", 00:17:48.417 "model_number": "QEMU NVMe Ctrl", 00:17:48.417 "serial_number": "12341", 00:17:48.417 "firmware_revision": "8.0.0", 00:17:48.417 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:48.417 "oacs": { 00:17:48.417 "security": 0, 00:17:48.417 "format": 1, 00:17:48.417 "firmware": 0, 00:17:48.417 "ns_manage": 1 00:17:48.417 }, 00:17:48.417 "multi_ctrlr": false, 00:17:48.417 "ana_reporting": false 00:17:48.417 }, 00:17:48.417 "vs": { 00:17:48.417 "nvme_version": "1.4" 00:17:48.417 }, 00:17:48.417 "ns_data": { 00:17:48.417 "id": 1, 00:17:48.417 "can_share": false 00:17:48.417 } 00:17:48.417 } 00:17:48.417 ], 00:17:48.417 "mp_policy": "active_passive" 00:17:48.417 } 00:17:48.417 } 00:17:48.417 ]' 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:48.417 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:48.681 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:48.681 04:41:37 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:48.939 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d9942d40-6b5e-4544-97c0-a9fa6b732a30 00:17:48.940 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d9942d40-6b5e-4544-97c0-a9fa6b732a30 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=dc94ffd9-8701-4ed8-97f4-ce4ca4638550 
00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:49.199 { 00:17:49.199 "name": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:49.199 "aliases": [ 00:17:49.199 "lvs/nvme0n1p0" 00:17:49.199 ], 00:17:49.199 "product_name": "Logical Volume", 00:17:49.199 "block_size": 4096, 00:17:49.199 "num_blocks": 26476544, 00:17:49.199 "uuid": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:49.199 "assigned_rate_limits": { 00:17:49.199 "rw_ios_per_sec": 0, 00:17:49.199 "rw_mbytes_per_sec": 0, 00:17:49.199 "r_mbytes_per_sec": 0, 00:17:49.199 "w_mbytes_per_sec": 0 00:17:49.199 }, 00:17:49.199 "claimed": false, 00:17:49.199 "zoned": false, 00:17:49.199 "supported_io_types": { 00:17:49.199 "read": true, 00:17:49.199 "write": true, 00:17:49.199 "unmap": true, 00:17:49.199 "flush": false, 00:17:49.199 "reset": true, 00:17:49.199 "nvme_admin": false, 00:17:49.199 "nvme_io": false, 00:17:49.199 "nvme_io_md": false, 00:17:49.199 "write_zeroes": true, 00:17:49.199 "zcopy": false, 00:17:49.199 "get_zone_info": false, 00:17:49.199 "zone_management": false, 00:17:49.199 "zone_append": false, 00:17:49.199 "compare": false, 00:17:49.199 "compare_and_write": false, 00:17:49.199 "abort": false, 00:17:49.199 "seek_hole": true, 00:17:49.199 "seek_data": true, 00:17:49.199 "copy": false, 00:17:49.199 "nvme_iov_md": false 00:17:49.199 }, 00:17:49.199 "driver_specific": { 00:17:49.199 "lvol": { 00:17:49.199 "lvol_store_uuid": "d9942d40-6b5e-4544-97c0-a9fa6b732a30", 00:17:49.199 "base_bdev": "nvme0n1", 00:17:49.199 "thin_provision": true, 00:17:49.199 "num_allocated_clusters": 0, 00:17:49.199 "snapshot": false, 00:17:49.199 "clone": false, 00:17:49.199 "esnap_clone": false 00:17:49.199 } 00:17:49.199 } 00:17:49.199 } 00:17:49.199 ]' 00:17:49.199 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:49.458 04:41:38 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.717 04:41:39 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:49.717 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:49.977 { 00:17:49.977 "name": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:49.977 "aliases": [ 00:17:49.977 "lvs/nvme0n1p0" 00:17:49.977 ], 00:17:49.977 "product_name": "Logical Volume", 00:17:49.977 "block_size": 4096, 00:17:49.977 "num_blocks": 26476544, 00:17:49.977 "uuid": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:49.977 "assigned_rate_limits": { 00:17:49.977 "rw_ios_per_sec": 0, 00:17:49.977 "rw_mbytes_per_sec": 0, 00:17:49.977 "r_mbytes_per_sec": 0, 00:17:49.977 "w_mbytes_per_sec": 0 00:17:49.977 }, 00:17:49.977 "claimed": false, 00:17:49.977 "zoned": false, 00:17:49.977 "supported_io_types": { 00:17:49.977 "read": true, 00:17:49.977 "write": true, 00:17:49.977 "unmap": true, 00:17:49.977 "flush": false, 00:17:49.977 "reset": true, 00:17:49.977 "nvme_admin": false, 00:17:49.977 "nvme_io": false, 00:17:49.977 "nvme_io_md": false, 00:17:49.977 "write_zeroes": true, 00:17:49.977 "zcopy": false, 00:17:49.977 "get_zone_info": false, 00:17:49.977 "zone_management": false, 00:17:49.977 "zone_append": false, 00:17:49.977 "compare": false, 00:17:49.977 "compare_and_write": false, 00:17:49.977 "abort": false, 00:17:49.977 "seek_hole": true, 00:17:49.977 "seek_data": true, 00:17:49.977 "copy": false, 00:17:49.977 "nvme_iov_md": false 00:17:49.977 }, 00:17:49.977 "driver_specific": { 00:17:49.977 "lvol": { 00:17:49.977 "lvol_store_uuid": "d9942d40-6b5e-4544-97c0-a9fa6b732a30", 00:17:49.977 "base_bdev": "nvme0n1", 00:17:49.977 "thin_provision": true, 00:17:49.977 "num_allocated_clusters": 0, 00:17:49.977 "snapshot": false, 00:17:49.977 "clone": false, 00:17:49.977 "esnap_clone": false 00:17:49.977 } 00:17:49.977 } 00:17:49.977 } 00:17:49.977 ]' 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:49.977 04:41:39 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:50.236 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:50.236 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc94ffd9-8701-4ed8-97f4-ce4ca4638550 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:50.496 { 00:17:50.496 "name": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:50.496 "aliases": [ 00:17:50.496 "lvs/nvme0n1p0" 00:17:50.496 ], 00:17:50.496 "product_name": "Logical Volume", 00:17:50.496 "block_size": 4096, 00:17:50.496 "num_blocks": 26476544, 00:17:50.496 "uuid": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:50.496 "assigned_rate_limits": { 00:17:50.496 "rw_ios_per_sec": 0, 00:17:50.496 "rw_mbytes_per_sec": 0, 00:17:50.496 "r_mbytes_per_sec": 0, 00:17:50.496 "w_mbytes_per_sec": 0 00:17:50.496 }, 00:17:50.496 "claimed": false, 00:17:50.496 "zoned": false, 00:17:50.496 "supported_io_types": { 00:17:50.496 "read": true, 00:17:50.496 "write": true, 00:17:50.496 "unmap": true, 00:17:50.496 "flush": false, 00:17:50.496 "reset": true, 00:17:50.496 "nvme_admin": false, 00:17:50.496 "nvme_io": false, 00:17:50.496 "nvme_io_md": false, 00:17:50.496 "write_zeroes": true, 00:17:50.496 "zcopy": false, 00:17:50.496 "get_zone_info": false, 00:17:50.496 "zone_management": false, 00:17:50.496 "zone_append": false, 00:17:50.496 "compare": false, 00:17:50.496 "compare_and_write": false, 00:17:50.496 "abort": false, 00:17:50.496 "seek_hole": true, 00:17:50.496 "seek_data": true, 00:17:50.496 "copy": false, 00:17:50.496 "nvme_iov_md": false 00:17:50.496 }, 00:17:50.496 "driver_specific": { 00:17:50.496 "lvol": { 00:17:50.496 "lvol_store_uuid": "d9942d40-6b5e-4544-97c0-a9fa6b732a30", 00:17:50.496 "base_bdev": "nvme0n1", 00:17:50.496 "thin_provision": true, 00:17:50.496 "num_allocated_clusters": 0, 00:17:50.496 "snapshot": false, 00:17:50.496 "clone": false, 00:17:50.496 "esnap_clone": false 00:17:50.496 } 00:17:50.496 } 00:17:50.496 } 00:17:50.496 ]' 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:50.496 04:41:39 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dc94ffd9-8701-4ed8-97f4-ce4ca4638550 -c nvc0n1p0 --l2p_dram_limit 60 00:17:50.757 [2024-10-15 04:41:40.047352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.047415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.757 [2024-10-15 04:41:40.047445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:50.757 
[2024-10-15 04:41:40.047464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.047578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.047601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.757 [2024-10-15 04:41:40.047620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:50.757 [2024-10-15 04:41:40.047643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.047687] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.757 [2024-10-15 04:41:40.048875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.757 [2024-10-15 04:41:40.048933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.048957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.757 [2024-10-15 04:41:40.048976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:17:50.757 [2024-10-15 04:41:40.048992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.049176] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0072938c-daca-472e-b541-b52dbaeaf80e 00:17:50.757 [2024-10-15 04:41:40.051105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.051152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:50.757 [2024-10-15 04:41:40.051173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:50.757 [2024-10-15 04:41:40.051197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.060638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.060905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.757 [2024-10-15 04:41:40.060939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.329 ms 00:17:50.757 [2024-10-15 04:41:40.060960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.061124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.061164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.757 [2024-10-15 04:41:40.061197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:50.757 [2024-10-15 04:41:40.061225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.061332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.061360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.757 [2024-10-15 04:41:40.061380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:50.757 [2024-10-15 04:41:40.061401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.061536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.757 [2024-10-15 04:41:40.067644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 
04:41:40.067693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.757 [2024-10-15 04:41:40.067721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.195 ms 00:17:50.757 [2024-10-15 04:41:40.067737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.067795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.757 [2024-10-15 04:41:40.067838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.757 [2024-10-15 04:41:40.067861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:50.757 [2024-10-15 04:41:40.067878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.757 [2024-10-15 04:41:40.067979] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:50.757 [2024-10-15 04:41:40.068168] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.757 [2024-10-15 04:41:40.068204] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.757 [2024-10-15 04:41:40.068241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.757 [2024-10-15 04:41:40.068266] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.758 [2024-10-15 04:41:40.068288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.758 [2024-10-15 04:41:40.068310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:50.758 [2024-10-15 04:41:40.068326] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.758 [2024-10-15 04:41:40.068346] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.758 [2024-10-15 04:41:40.068375] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.758 [2024-10-15 04:41:40.068396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.758 [2024-10-15 04:41:40.068412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.758 [2024-10-15 04:41:40.068432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:17:50.758 [2024-10-15 04:41:40.068457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.758 [2024-10-15 04:41:40.068586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.758 [2024-10-15 04:41:40.068605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.758 [2024-10-15 04:41:40.068627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:50.758 [2024-10-15 04:41:40.068643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.758 [2024-10-15 04:41:40.068785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.758 [2024-10-15 04:41:40.068810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.758 [2024-10-15 04:41:40.068831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.758 [2024-10-15 04:41:40.068846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.069079] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:50.758 [2024-10-15 04:41:40.069178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.069241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:50.758 [2024-10-15 04:41:40.069291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.758 [2024-10-15 04:41:40.069426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:50.758 [2024-10-15 04:41:40.069484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.758 [2024-10-15 04:41:40.069538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.758 [2024-10-15 04:41:40.069655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:50.758 [2024-10-15 04:41:40.069728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.758 [2024-10-15 04:41:40.069777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.758 [2024-10-15 04:41:40.069857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:50.758 [2024-10-15 04:41:40.070013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.758 [2024-10-15 04:41:40.070188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:50.758 [2024-10-15 04:41:40.070260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.758 [2024-10-15 04:41:40.070602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.758 [2024-10-15 04:41:40.070670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.758 [2024-10-15 04:41:40.070687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.758 [2024-10-15 04:41:40.070720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.758 [2024-10-15 04:41:40.070739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.758 [2024-10-15 04:41:40.070774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.758 [2024-10-15 04:41:40.070788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.758 [2024-10-15 04:41:40.070823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.758 [2024-10-15 04:41:40.070901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:50.758 [2024-10-15 04:41:40.070937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.758 [2024-10-15 04:41:40.070957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.758 [2024-10-15 04:41:40.070993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:50.758 [2024-10-15 04:41:40.071015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.758 [2024-10-15 04:41:40.071032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.758 [2024-10-15 04:41:40.071051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:50.758 [2024-10-15 04:41:40.071067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.071088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.758 [2024-10-15 04:41:40.071104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:50.758 [2024-10-15 04:41:40.071125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.071142] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.758 [2024-10-15 04:41:40.071165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.758 [2024-10-15 04:41:40.071182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.758 [2024-10-15 04:41:40.071201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.758 [2024-10-15 04:41:40.071220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.758 [2024-10-15 04:41:40.071245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.758 [2024-10-15 04:41:40.071260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.758 [2024-10-15 04:41:40.071279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.758 [2024-10-15 04:41:40.071297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.758 [2024-10-15 04:41:40.071318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.758 [2024-10-15 04:41:40.071347] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.758 [2024-10-15 04:41:40.071374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:50.758 [2024-10-15 04:41:40.071414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:50.758 [2024-10-15 04:41:40.071432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:50.758 [2024-10-15 04:41:40.071455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:50.758 [2024-10-15 04:41:40.071474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:50.758 [2024-10-15 04:41:40.071494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:50.758 [2024-10-15 04:41:40.071512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:50.758 [2024-10-15 04:41:40.071535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:50.758 [2024-10-15 04:41:40.071553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:50.758 [2024-10-15 04:41:40.071579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:50.758 [2024-10-15 04:41:40.071679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.758 [2024-10-15 04:41:40.071701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.758 [2024-10-15 04:41:40.071755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.758 [2024-10-15 04:41:40.071773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.758 [2024-10-15 04:41:40.071809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.758 [2024-10-15 04:41:40.071828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.758 [2024-10-15 04:41:40.071865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.758 [2024-10-15 04:41:40.071886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:17:50.758 [2024-10-15 04:41:40.071907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.758 [2024-10-15 04:41:40.072092] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
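The layout dump above is worth a quick read before the scrub (the ~4 s pause that follows is the NV cache data region being wiped ahead of first use). FTL exposes 20971520 user blocks (the num_blocks reported for ftl0 further down) and keeps one 4-byte L2P entry per block, which is exactly the 80.00 MiB l2p region shown, while --l2p_dram_limit 60 caps how much of that table may stay resident in DRAM (hence the "l2p maximum resident size is: 59 (of 60) MiB" notice later). The arithmetic, as a sketch:

    entries=20971520; addr_size=4
    echo $(( entries * addr_size / 1024 / 1024 ))   # 80 MiB l2p region
    # Only ~60 MiB of that table is kept in DRAM; the rest is paged in and
    # out of the l2p region, which the dump places on the NV cache device
    # (nvc0n1p0).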
00:17:50.758 [2024-10-15 04:41:40.072123] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:54.951 [2024-10-15 04:41:44.080690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.080758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:54.951 [2024-10-15 04:41:44.080776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4015.103 ms 00:17:54.951 [2024-10-15 04:41:44.080793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.119245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.119311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.951 [2024-10-15 04:41:44.119328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.167 ms 00:17:54.951 [2024-10-15 04:41:44.119341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.119515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.119533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.951 [2024-10-15 04:41:44.119545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:54.951 [2024-10-15 04:41:44.119560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.180985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.181039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.951 [2024-10-15 04:41:44.181055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.470 ms 00:17:54.951 [2024-10-15 04:41:44.181068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.181119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.181133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.951 [2024-10-15 04:41:44.181144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.951 [2024-10-15 04:41:44.181164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.181674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.181691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.951 [2024-10-15 04:41:44.181702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:17:54.951 [2024-10-15 04:41:44.181716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.181872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.181890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.951 [2024-10-15 04:41:44.181901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:54.951 [2024-10-15 04:41:44.181917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.204412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.204460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.951 [2024-10-15 
04:41:44.204474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.489 ms 00:17:54.951 [2024-10-15 04:41:44.204488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.951 [2024-10-15 04:41:44.217660] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.951 [2024-10-15 04:41:44.234396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.951 [2024-10-15 04:41:44.234469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.951 [2024-10-15 04:41:44.234489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.821 ms 00:17:54.951 [2024-10-15 04:41:44.234501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.952 [2024-10-15 04:41:44.324605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.952 [2024-10-15 04:41:44.324693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:54.952 [2024-10-15 04:41:44.324716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.184 ms 00:17:54.952 [2024-10-15 04:41:44.324727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.952 [2024-10-15 04:41:44.325021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.952 [2024-10-15 04:41:44.325039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.952 [2024-10-15 04:41:44.325057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:54.952 [2024-10-15 04:41:44.325068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.952 [2024-10-15 04:41:44.361847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.952 [2024-10-15 04:41:44.362091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:54.952 [2024-10-15 04:41:44.362123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.739 ms 00:17:54.952 [2024-10-15 04:41:44.362137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.952 [2024-10-15 04:41:44.399307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.952 [2024-10-15 04:41:44.399358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:54.952 [2024-10-15 04:41:44.399379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.133 ms 00:17:54.952 [2024-10-15 04:41:44.399389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.952 [2024-10-15 04:41:44.400189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.952 [2024-10-15 04:41:44.400215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.952 [2024-10-15 04:41:44.400234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:17:54.952 [2024-10-15 04:41:44.400245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 04:41:44.500307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.211 [2024-10-15 04:41:44.500355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:55.211 [2024-10-15 04:41:44.500386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.146 ms 00:17:55.211 [2024-10-15 04:41:44.500397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 
04:41:44.539539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.211 [2024-10-15 04:41:44.539585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:55.211 [2024-10-15 04:41:44.539603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.090 ms 00:17:55.211 [2024-10-15 04:41:44.539614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 04:41:44.578477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.211 [2024-10-15 04:41:44.578547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:55.211 [2024-10-15 04:41:44.578566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.862 ms 00:17:55.211 [2024-10-15 04:41:44.578577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 04:41:44.619100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.211 [2024-10-15 04:41:44.619164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:55.211 [2024-10-15 04:41:44.619184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.501 ms 00:17:55.211 [2024-10-15 04:41:44.619194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 04:41:44.619282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.211 [2024-10-15 04:41:44.619295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:55.211 [2024-10-15 04:41:44.619315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:55.211 [2024-10-15 04:41:44.619325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 04:41:44.619484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.211 [2024-10-15 04:41:44.619500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:55.211 [2024-10-15 04:41:44.619517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:55.211 [2024-10-15 04:41:44.619528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.211 [2024-10-15 04:41:44.620727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4580.345 ms, result 0 00:17:55.211 { 00:17:55.211 "name": "ftl0", 00:17:55.211 "uuid": "0072938c-daca-472e-b541-b52dbaeaf80e" 00:17:55.211 } 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.211 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:55.471 04:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:55.730 [ 00:17:55.730 { 00:17:55.730 "name": "ftl0", 00:17:55.730 "aliases": [ 00:17:55.730 "0072938c-daca-472e-b541-b52dbaeaf80e" 00:17:55.730 ], 00:17:55.730 "product_name": "FTL 
disk", 00:17:55.730 "block_size": 4096, 00:17:55.730 "num_blocks": 20971520, 00:17:55.730 "uuid": "0072938c-daca-472e-b541-b52dbaeaf80e", 00:17:55.730 "assigned_rate_limits": { 00:17:55.730 "rw_ios_per_sec": 0, 00:17:55.730 "rw_mbytes_per_sec": 0, 00:17:55.730 "r_mbytes_per_sec": 0, 00:17:55.730 "w_mbytes_per_sec": 0 00:17:55.730 }, 00:17:55.730 "claimed": false, 00:17:55.730 "zoned": false, 00:17:55.730 "supported_io_types": { 00:17:55.730 "read": true, 00:17:55.730 "write": true, 00:17:55.730 "unmap": true, 00:17:55.730 "flush": true, 00:17:55.730 "reset": false, 00:17:55.730 "nvme_admin": false, 00:17:55.730 "nvme_io": false, 00:17:55.730 "nvme_io_md": false, 00:17:55.730 "write_zeroes": true, 00:17:55.730 "zcopy": false, 00:17:55.730 "get_zone_info": false, 00:17:55.730 "zone_management": false, 00:17:55.730 "zone_append": false, 00:17:55.730 "compare": false, 00:17:55.730 "compare_and_write": false, 00:17:55.730 "abort": false, 00:17:55.730 "seek_hole": false, 00:17:55.730 "seek_data": false, 00:17:55.730 "copy": false, 00:17:55.730 "nvme_iov_md": false 00:17:55.730 }, 00:17:55.730 "driver_specific": { 00:17:55.730 "ftl": { 00:17:55.730 "base_bdev": "dc94ffd9-8701-4ed8-97f4-ce4ca4638550", 00:17:55.730 "cache": "nvc0n1p0" 00:17:55.730 } 00:17:55.730 } 00:17:55.730 } 00:17:55.730 ] 00:17:55.730 04:41:45 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:17:55.730 04:41:45 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:55.730 04:41:45 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:55.990 04:41:45 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:55.990 04:41:45 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:55.990 [2024-10-15 04:41:45.459851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.990 [2024-10-15 04:41:45.459910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:55.990 [2024-10-15 04:41:45.459926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.990 [2024-10-15 04:41:45.459940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.990 [2024-10-15 04:41:45.459975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:55.990 [2024-10-15 04:41:45.464275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.990 [2024-10-15 04:41:45.464312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:55.990 [2024-10-15 04:41:45.464328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:17:55.990 [2024-10-15 04:41:45.464338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.990 [2024-10-15 04:41:45.464769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.990 [2024-10-15 04:41:45.464787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:55.990 [2024-10-15 04:41:45.464801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:55.990 [2024-10-15 04:41:45.464811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.990 [2024-10-15 04:41:45.467345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.990 [2024-10-15 04:41:45.467385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:55.990 
[2024-10-15 04:41:45.467407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.500 ms 00:17:55.990 [2024-10-15 04:41:45.467418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.990 [2024-10-15 04:41:45.472485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.990 [2024-10-15 04:41:45.472522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:55.990 [2024-10-15 04:41:45.472539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:17:55.990 [2024-10-15 04:41:45.472549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.250 [2024-10-15 04:41:45.509848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.250 [2024-10-15 04:41:45.510030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.250 [2024-10-15 04:41:45.510059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.260 ms 00:17:56.250 [2024-10-15 04:41:45.510070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.250 [2024-10-15 04:41:45.532635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.250 [2024-10-15 04:41:45.532680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.250 [2024-10-15 04:41:45.532697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.522 ms 00:17:56.250 [2024-10-15 04:41:45.532708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.250 [2024-10-15 04:41:45.532946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.250 [2024-10-15 04:41:45.532962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.250 [2024-10-15 04:41:45.532975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:17:56.250 [2024-10-15 04:41:45.532985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.250 [2024-10-15 04:41:45.571402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.250 [2024-10-15 04:41:45.571469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:56.250 [2024-10-15 04:41:45.571489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.441 ms 00:17:56.250 [2024-10-15 04:41:45.571499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.250 [2024-10-15 04:41:45.608500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.250 [2024-10-15 04:41:45.608556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:56.251 [2024-10-15 04:41:45.608575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.000 ms 00:17:56.251 [2024-10-15 04:41:45.608586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.251 [2024-10-15 04:41:45.645833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.251 [2024-10-15 04:41:45.645879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.251 [2024-10-15 04:41:45.645897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.236 ms 00:17:56.251 [2024-10-15 04:41:45.645907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.251 [2024-10-15 04:41:45.682688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.251 [2024-10-15 04:41:45.682754] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.251 [2024-10-15 04:41:45.682773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.691 ms 00:17:56.251 [2024-10-15 04:41:45.682783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.251 [2024-10-15 04:41:45.683070] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.251 [2024-10-15 04:41:45.683102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 
[2024-10-15 04:41:45.683381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:56.251 [2024-10-15 04:41:45.683695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.683996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.251 [2024-10-15 04:41:45.684199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.252 [2024-10-15 04:41:45.684399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.252 [2024-10-15 04:41:45.684413] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0072938c-daca-472e-b541-b52dbaeaf80e 00:17:56.252 [2024-10-15 04:41:45.684424] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.252 [2024-10-15 04:41:45.684439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:56.252 [2024-10-15 04:41:45.684450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.252 [2024-10-15 04:41:45.684463] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.252 [2024-10-15 04:41:45.684473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.252 [2024-10-15 04:41:45.684489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.252 [2024-10-15 04:41:45.684500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.252 [2024-10-15 04:41:45.684512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.252 [2024-10-15 04:41:45.684521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.252 [2024-10-15 04:41:45.684534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.252 [2024-10-15 04:41:45.684546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.252 [2024-10-15 04:41:45.684559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:17:56.252 [2024-10-15 04:41:45.684569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.252 [2024-10-15 04:41:45.705346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.252 [2024-10-15 04:41:45.705499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.252 [2024-10-15 04:41:45.705625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.717 ms 00:17:56.252 [2024-10-15 04:41:45.705666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.252 [2024-10-15 04:41:45.706305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.252 [2024-10-15 04:41:45.706408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.252 [2024-10-15 04:41:45.706486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:17:56.252 [2024-10-15 04:41:45.706521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.512 [2024-10-15 04:41:45.778054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.512 [2024-10-15 04:41:45.778323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.512 [2024-10-15 04:41:45.778422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.512 [2024-10-15 04:41:45.778458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:56.512 [2024-10-15 04:41:45.778566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.512 [2024-10-15 04:41:45.778678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.512 [2024-10-15 04:41:45.778755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.512 [2024-10-15 04:41:45.778785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.512 [2024-10-15 04:41:45.778974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.512 [2024-10-15 04:41:45.779064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.512 [2024-10-15 04:41:45.779158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.512 [2024-10-15 04:41:45.779192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.512 [2024-10-15 04:41:45.779257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.512 [2024-10-15 04:41:45.779291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.512 [2024-10-15 04:41:45.779324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.512 [2024-10-15 04:41:45.779356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.512 [2024-10-15 04:41:45.913937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.512 [2024-10-15 04:41:45.914190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.512 [2024-10-15 04:41:45.914330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.512 [2024-10-15 04:41:45.914372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.512 [2024-10-15 04:41:46.018043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.512 [2024-10-15 04:41:46.018233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.772 [2024-10-15 04:41:46.018320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 04:41:46.018387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.018539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.772 [2024-10-15 04:41:46.018579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.772 [2024-10-15 04:41:46.018675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 04:41:46.018711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.018860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.772 [2024-10-15 04:41:46.018902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.772 [2024-10-15 04:41:46.018982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 04:41:46.019017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.019170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.772 [2024-10-15 04:41:46.019268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.772 [2024-10-15 04:41:46.019348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 
04:41:46.019383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.019525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.772 [2024-10-15 04:41:46.019631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:56.772 [2024-10-15 04:41:46.019709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 04:41:46.019794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.019891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.772 [2024-10-15 04:41:46.019927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.772 [2024-10-15 04:41:46.019999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 04:41:46.020067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.020162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.772 [2024-10-15 04:41:46.020197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.772 [2024-10-15 04:41:46.020270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.772 [2024-10-15 04:41:46.020304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.772 [2024-10-15 04:41:46.020536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 561.582 ms, result 0 00:17:56.772 true 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74432 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74432 ']' 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74432 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74432 00:17:56.772 killing process with pid 74432 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74432' 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74432 00:17:56.772 04:41:46 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74432 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:02.045 04:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:02.045 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:02.045 fio-3.35 00:18:02.045 Starting 1 thread 00:18:07.370 00:18:07.370 test: (groupid=0, jobs=1): err= 0: pid=74656: Tue Oct 15 04:41:56 2024 00:18:07.370 read: IOPS=984, BW=65.3MiB/s (68.5MB/s)(255MiB/3895msec) 00:18:07.370 slat (nsec): min=4272, max=33396, avg=6363.40, stdev=3067.14 00:18:07.370 clat (usec): min=307, max=1042, avg=456.66, stdev=73.00 00:18:07.370 lat (usec): min=312, max=1049, avg=463.02, stdev=73.38 00:18:07.370 clat percentiles (usec): 00:18:07.370 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 383], 20.00th=[ 400], 00:18:07.370 | 30.00th=[ 412], 40.00th=[ 424], 50.00th=[ 457], 60.00th=[ 474], 00:18:07.370 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 570], 00:18:07.370 | 99.00th=[ 676], 99.50th=[ 734], 99.90th=[ 914], 99.95th=[ 1029], 00:18:07.370 | 99.99th=[ 1045] 00:18:07.370 write: IOPS=991, BW=65.8MiB/s (69.0MB/s)(256MiB/3891msec); 0 zone resets 00:18:07.370 slat (nsec): min=15175, max=87138, avg=20776.87, stdev=6208.78 00:18:07.370 clat (usec): min=348, max=1096, avg=517.62, stdev=81.79 00:18:07.370 lat (usec): min=366, max=1113, avg=538.39, stdev=81.94 00:18:07.370 clat percentiles (usec): 00:18:07.370 | 1.00th=[ 400], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 437], 00:18:07.370 | 30.00th=[ 474], 40.00th=[ 494], 50.00th=[ 506], 60.00th=[ 529], 00:18:07.370 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 652], 00:18:07.370 | 99.00th=[ 791], 99.50th=[ 840], 99.90th=[ 955], 99.95th=[ 1020], 00:18:07.370 | 99.99th=[ 1090] 00:18:07.370 bw ( KiB/s): min=63512, max=70040, per=100.00%, avg=67494.86, stdev=2536.70, samples=7 00:18:07.370 iops : min= 934, max= 1030, avg=992.57, stdev=37.30, samples=7 00:18:07.370 lat (usec) : 500=61.56%, 750=37.53%, 1000=0.86% 
00:18:07.370 lat (msec) : 2=0.05% 00:18:07.370 cpu : usr=99.10%, sys=0.15%, ctx=6, majf=0, minf=1169 00:18:07.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.370 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.370 00:18:07.370 Run status group 0 (all jobs): 00:18:07.370 READ: bw=65.3MiB/s (68.5MB/s), 65.3MiB/s-65.3MiB/s (68.5MB/s-68.5MB/s), io=255MiB (267MB), run=3895-3895msec 00:18:07.370 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=256MiB (269MB), run=3891-3891msec 00:18:08.745 ----------------------------------------------------- 00:18:08.745 Suppressions used: 00:18:08.745 count bytes template 00:18:08.745 1 5 /usr/src/fio/parse.c 00:18:08.745 1 8 libtcmalloc_minimal.so 00:18:08.745 1 904 libcrypto.so 00:18:08.745 ----------------------------------------------------- 00:18:08.745 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:09.004 04:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:09.263 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:09.263 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:09.263 fio-3.35 00:18:09.263 Starting 2 threads 00:18:35.825 00:18:35.825 first_half: (groupid=0, jobs=1): err= 0: pid=74760: Tue Oct 15 04:42:24 2024 00:18:35.825 read: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(255MiB/24384msec) 00:18:35.825 slat (nsec): min=3413, max=30528, avg=6165.84, stdev=1990.70 00:18:35.825 clat (usec): min=837, max=312467, avg=37963.59, stdev=19707.80 00:18:35.825 lat (usec): min=845, max=312472, avg=37969.76, stdev=19708.01 00:18:35.825 clat percentiles (msec): 00:18:35.825 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:18:35.825 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:18:35.825 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 60], 00:18:35.825 | 99.00th=[ 155], 99.50th=[ 174], 99.90th=[ 211], 99.95th=[ 243], 00:18:35.825 | 99.99th=[ 305] 00:18:35.825 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(256MiB/21655msec); 0 zone resets 00:18:35.825 slat (usec): min=4, max=676, avg= 7.67, stdev= 5.74 00:18:35.825 clat (usec): min=433, max=96789, avg=9766.54, stdev=16367.01 00:18:35.825 lat (usec): min=442, max=96795, avg=9774.21, stdev=16367.10 00:18:35.825 clat percentiles (usec): 00:18:35.825 | 1.00th=[ 1045], 5.00th=[ 1385], 10.00th=[ 1631], 20.00th=[ 2008], 00:18:35.825 | 30.00th=[ 3294], 40.00th=[ 4817], 50.00th=[ 5407], 60.00th=[ 6390], 00:18:35.825 | 70.00th=[ 7177], 80.00th=[10552], 90.00th=[13304], 95.00th=[45876], 00:18:35.825 | 99.00th=[81265], 99.50th=[85459], 99.90th=[93848], 99.95th=[94897], 00:18:35.825 | 99.99th=[95945] 00:18:35.825 bw ( KiB/s): min= 2496, max=42880, per=96.63%, avg=22792.96, stdev=13706.35, samples=23 00:18:35.825 iops : min= 624, max=10720, avg=5698.22, stdev=3426.59, samples=23 00:18:35.825 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.30% 00:18:35.825 lat (msec) : 2=9.71%, 4=7.69%, 10=21.89%, 20=7.49%, 50=46.84% 00:18:35.825 lat (msec) : 100=4.85%, 250=1.13%, 500=0.02% 00:18:35.825 cpu : usr=99.26%, sys=0.14%, ctx=40, majf=0, minf=5567 00:18:35.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:35.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.825 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.825 issued rwts: total=65310,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.825 second_half: (groupid=0, jobs=1): err= 0: pid=74761: Tue Oct 15 04:42:24 2024 00:18:35.825 read: IOPS=2660, BW=10.4MiB/s (10.9MB/s)(255MiB/24550msec) 00:18:35.825 slat (nsec): min=3459, max=32460, avg=6097.53, stdev=1989.04 00:18:35.825 clat (usec): min=1142, max=318155, avg=37225.95, stdev=20854.28 00:18:35.825 lat (usec): min=1148, max=318161, avg=37232.05, stdev=20854.54 00:18:35.825 clat percentiles (msec): 00:18:35.825 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 33], 00:18:35.825 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:18:35.825 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 
40], 95.00th=[ 54], 00:18:35.825 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 207], 99.95th=[ 226], 00:18:35.825 | 99.99th=[ 313] 00:18:35.825 write: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(256MiB/22229msec); 0 zone resets 00:18:35.825 slat (usec): min=4, max=356, avg= 7.83, stdev= 4.31 00:18:35.825 clat (usec): min=398, max=97520, avg=10824.01, stdev=17620.02 00:18:35.825 lat (usec): min=403, max=97526, avg=10831.84, stdev=17620.15 00:18:35.825 clat percentiles (usec): 00:18:35.825 | 1.00th=[ 988], 5.00th=[ 1270], 10.00th=[ 1483], 20.00th=[ 1778], 00:18:35.825 | 30.00th=[ 2212], 40.00th=[ 3916], 50.00th=[ 5407], 60.00th=[ 6521], 00:18:35.825 | 70.00th=[ 8029], 80.00th=[11600], 90.00th=[27919], 95.00th=[56361], 00:18:35.825 | 99.00th=[82314], 99.50th=[85459], 99.90th=[95945], 99.95th=[95945], 00:18:35.825 | 99.99th=[96994] 00:18:35.825 bw ( KiB/s): min= 1064, max=53912, per=92.62%, avg=21844.33, stdev=13823.45, samples=24 00:18:35.825 iops : min= 266, max=13478, avg=5461.04, stdev=3455.85, samples=24 00:18:35.826 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.49% 00:18:35.826 lat (msec) : 2=12.83%, 4=7.07%, 10=18.78%, 20=7.01%, 50=48.46% 00:18:35.826 lat (msec) : 100=3.81%, 250=1.47%, 500=0.02% 00:18:35.826 cpu : usr=99.27%, sys=0.19%, ctx=39, majf=0, minf=5546 00:18:35.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:35.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.826 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.826 issued rwts: total=65320,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.826 00:18:35.826 Run status group 0 (all jobs): 00:18:35.826 READ: bw=20.8MiB/s (21.8MB/s), 10.4MiB/s-10.5MiB/s (10.9MB/s-11.0MB/s), io=510MiB (535MB), run=24384-24550msec 00:18:35.826 WRITE: bw=23.0MiB/s (24.2MB/s), 11.5MiB/s-11.8MiB/s (12.1MB/s-12.4MB/s), io=512MiB (537MB), run=21655-22229msec 00:18:37.732 ----------------------------------------------------- 00:18:37.732 Suppressions used: 00:18:37.732 count bytes template 00:18:37.732 2 10 /usr/src/fio/parse.c 00:18:37.732 4 384 /usr/src/fio/iolog.c 00:18:37.732 1 8 libtcmalloc_minimal.so 00:18:37.732 1 904 libcrypto.so 00:18:37.732 ----------------------------------------------------- 00:18:37.732 00:18:37.732 04:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:37.732 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.732 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:37.733 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:37.992 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:37.992 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:37.992 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:37.992 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:37.992 04:42:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:37.992 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:37.992 fio-3.35 00:18:37.992 Starting 1 thread 00:18:56.081 00:18:56.081 test: (groupid=0, jobs=1): err= 0: pid=75085: Tue Oct 15 04:42:42 2024 00:18:56.081 read: IOPS=7461, BW=29.1MiB/s (30.6MB/s)(255MiB/8739msec) 00:18:56.081 slat (usec): min=3, max=124, avg= 5.51, stdev= 1.70 00:18:56.081 clat (usec): min=658, max=91638, avg=17146.60, stdev=2053.04 00:18:56.081 lat (usec): min=662, max=91644, avg=17152.11, stdev=2053.06 00:18:56.081 clat percentiles (usec): 00:18:56.081 | 1.00th=[15533], 5.00th=[15664], 10.00th=[15795], 20.00th=[16057], 00:18:56.081 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17171], 60.00th=[17433], 00:18:56.081 | 70.00th=[17695], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:18:56.081 | 99.00th=[25297], 99.50th=[28181], 99.90th=[46400], 99.95th=[52167], 00:18:56.081 | 99.99th=[54789] 00:18:56.081 write: IOPS=13.0k, BW=51.0MiB/s (53.4MB/s)(256MiB/5024msec); 0 zone resets 00:18:56.081 slat (usec): min=4, max=679, avg= 7.87, stdev= 7.63 00:18:56.081 clat (usec): min=611, max=57031, avg=9765.20, stdev=11927.99 00:18:56.081 lat (usec): min=620, max=57038, avg=9773.07, stdev=11927.98 00:18:56.081 clat percentiles (usec): 00:18:56.081 | 1.00th=[ 979], 5.00th=[ 1188], 10.00th=[ 1336], 20.00th=[ 1516], 00:18:56.081 | 30.00th=[ 1696], 40.00th=[ 2114], 50.00th=[ 6325], 60.00th=[ 7308], 00:18:56.081 | 70.00th=[ 8455], 80.00th=[10945], 90.00th=[34341], 95.00th=[36439], 00:18:56.081 | 99.00th=[43779], 99.50th=[47973], 99.90th=[53216], 99.95th=[53740], 00:18:56.081 | 99.99th=[55837] 00:18:56.081 bw ( KiB/s): min= 1064, max=70752, per=91.34%, avg=47662.55, stdev=17617.51, samples=11 00:18:56.081 iops : min= 266, max=17688, avg=11915.64, stdev=4404.38, samples=11 00:18:56.081 lat (usec) : 750=0.01%, 1000=0.63% 00:18:56.081 lat (msec) : 2=18.89%, 4=1.58%, 10=17.46%, 20=52.46%, 50=8.74% 00:18:56.081 lat (msec) : 100=0.24% 00:18:56.081 cpu : usr=98.97%, 
sys=0.26%, ctx=25, majf=0, minf=5566 00:18:56.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.081 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.081 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.081 00:18:56.081 Run status group 0 (all jobs): 00:18:56.081 READ: bw=29.1MiB/s (30.6MB/s), 29.1MiB/s-29.1MiB/s (30.6MB/s-30.6MB/s), io=255MiB (267MB), run=8739-8739msec 00:18:56.081 WRITE: bw=51.0MiB/s (53.4MB/s), 51.0MiB/s-51.0MiB/s (53.4MB/s-53.4MB/s), io=256MiB (268MB), run=5024-5024msec 00:18:56.081 ----------------------------------------------------- 00:18:56.081 Suppressions used: 00:18:56.081 count bytes template 00:18:56.081 1 5 /usr/src/fio/parse.c 00:18:56.081 2 192 /usr/src/fio/iolog.c 00:18:56.081 1 8 libtcmalloc_minimal.so 00:18:56.081 1 904 libcrypto.so 00:18:56.081 ----------------------------------------------------- 00:18:56.081 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.081 Remove shared memory files 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58070 /dev/shm/spdk_tgt_trace.pid73316 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:56.081 00:18:56.081 real 1m9.367s 00:18:56.081 user 2m30.051s 00:18:56.081 sys 0m3.968s 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.081 04:42:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.081 ************************************ 00:18:56.081 END TEST ftl_fio_basic 00:18:56.081 ************************************ 00:18:56.081 04:42:44 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.081 04:42:44 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:56.081 04:42:44 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.081 04:42:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:56.081 ************************************ 00:18:56.081 START TEST ftl_bdevperf 00:18:56.081 ************************************ 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.081 * Looking for test storage... 
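All three fio passes in ftl_fio_basic above (randw-verify, randw-verify-j2, randw-verify-depth128) go through the same fio_bdev/fio_plugin helper traced repeatedly in this log: it looks up which ASan runtime the SPDK fio plugin was linked against and preloads it ahead of the plugin, since ASan must come first in the preload order to initialize. A condensed sketch of what the helper effectively runs, using paths taken from this log (only the .fio job file changes between passes):

    # resolve the ASan runtime the spdk_bdev fio plugin was linked against
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8 in this run
    # preload the sanitizer first, then the plugin, then hand fio the job file
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio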
00:18:56.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.081 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:56.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.081 --rc genhtml_branch_coverage=1 00:18:56.082 --rc genhtml_function_coverage=1 00:18:56.082 --rc genhtml_legend=1 00:18:56.082 --rc geninfo_all_blocks=1 00:18:56.082 --rc geninfo_unexecuted_blocks=1 00:18:56.082 00:18:56.082 ' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:56.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.082 --rc genhtml_branch_coverage=1 00:18:56.082 
--rc genhtml_function_coverage=1 00:18:56.082 --rc genhtml_legend=1 00:18:56.082 --rc geninfo_all_blocks=1 00:18:56.082 --rc geninfo_unexecuted_blocks=1 00:18:56.082 00:18:56.082 ' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:56.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.082 --rc genhtml_branch_coverage=1 00:18:56.082 --rc genhtml_function_coverage=1 00:18:56.082 --rc genhtml_legend=1 00:18:56.082 --rc geninfo_all_blocks=1 00:18:56.082 --rc geninfo_unexecuted_blocks=1 00:18:56.082 00:18:56.082 ' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:56.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.082 --rc genhtml_branch_coverage=1 00:18:56.082 --rc genhtml_function_coverage=1 00:18:56.082 --rc genhtml_legend=1 00:18:56.082 --rc geninfo_all_blocks=1 00:18:56.082 --rc geninfo_unexecuted_blocks=1 00:18:56.082 00:18:56.082 ' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75329 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75329 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75329 ']' 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.082 04:42:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.082 [2024-10-15 04:42:45.388000] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
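bdevperf is started here with -z, so it stays paused and waits for RPC-driven configuration instead of reading a config file, and with the ftl0 bdev name passed via -T; the script then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that startup handshake, using the paths shown above (the polling loop is illustrative, not waitforlisten's exact implementation):

    # start bdevperf paused so the FTL bdev stack can be built over RPC first
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # poll the default RPC socket until the application responds
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket is live, the bdev_nvme_attach_controller and bdev_lvol_* calls that follow below assemble the base, cache, and lvol devices before any I/O is issued.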
00:18:56.082 [2024-10-15 04:42:45.388119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75329 ] 00:18:56.082 [2024-10-15 04:42:45.560540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.341 [2024-10-15 04:42:45.682939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:56.966 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:57.226 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:57.485 { 00:18:57.485 "name": "nvme0n1", 00:18:57.485 "aliases": [ 00:18:57.485 "b4b4b588-81db-4d15-bb15-ceabf98e2be8" 00:18:57.485 ], 00:18:57.485 "product_name": "NVMe disk", 00:18:57.485 "block_size": 4096, 00:18:57.485 "num_blocks": 1310720, 00:18:57.485 "uuid": "b4b4b588-81db-4d15-bb15-ceabf98e2be8", 00:18:57.485 "numa_id": -1, 00:18:57.485 "assigned_rate_limits": { 00:18:57.485 "rw_ios_per_sec": 0, 00:18:57.485 "rw_mbytes_per_sec": 0, 00:18:57.485 "r_mbytes_per_sec": 0, 00:18:57.485 "w_mbytes_per_sec": 0 00:18:57.485 }, 00:18:57.485 "claimed": true, 00:18:57.485 "claim_type": "read_many_write_one", 00:18:57.485 "zoned": false, 00:18:57.485 "supported_io_types": { 00:18:57.485 "read": true, 00:18:57.485 "write": true, 00:18:57.485 "unmap": true, 00:18:57.485 "flush": true, 00:18:57.485 "reset": true, 00:18:57.485 "nvme_admin": true, 00:18:57.485 "nvme_io": true, 00:18:57.485 "nvme_io_md": false, 00:18:57.485 "write_zeroes": true, 00:18:57.485 "zcopy": false, 00:18:57.485 "get_zone_info": false, 00:18:57.485 "zone_management": false, 00:18:57.485 "zone_append": false, 00:18:57.485 "compare": true, 00:18:57.485 "compare_and_write": false, 00:18:57.485 "abort": true, 00:18:57.485 "seek_hole": false, 00:18:57.485 "seek_data": false, 00:18:57.485 "copy": true, 00:18:57.485 "nvme_iov_md": false 00:18:57.485 }, 00:18:57.485 "driver_specific": { 00:18:57.485 
"nvme": [ 00:18:57.485 { 00:18:57.485 "pci_address": "0000:00:11.0", 00:18:57.485 "trid": { 00:18:57.485 "trtype": "PCIe", 00:18:57.485 "traddr": "0000:00:11.0" 00:18:57.485 }, 00:18:57.485 "ctrlr_data": { 00:18:57.485 "cntlid": 0, 00:18:57.485 "vendor_id": "0x1b36", 00:18:57.485 "model_number": "QEMU NVMe Ctrl", 00:18:57.485 "serial_number": "12341", 00:18:57.485 "firmware_revision": "8.0.0", 00:18:57.485 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:57.485 "oacs": { 00:18:57.485 "security": 0, 00:18:57.485 "format": 1, 00:18:57.485 "firmware": 0, 00:18:57.485 "ns_manage": 1 00:18:57.485 }, 00:18:57.485 "multi_ctrlr": false, 00:18:57.485 "ana_reporting": false 00:18:57.485 }, 00:18:57.485 "vs": { 00:18:57.485 "nvme_version": "1.4" 00:18:57.485 }, 00:18:57.485 "ns_data": { 00:18:57.485 "id": 1, 00:18:57.485 "can_share": false 00:18:57.485 } 00:18:57.485 } 00:18:57.485 ], 00:18:57.485 "mp_policy": "active_passive" 00:18:57.485 } 00:18:57.485 } 00:18:57.485 ]' 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:57.485 04:42:46 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:57.744 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d9942d40-6b5e-4544-97c0-a9fa6b732a30 00:18:57.744 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:57.744 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9942d40-6b5e-4544-97c0-a9fa6b732a30 00:18:58.003 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:58.262 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d862c6c3-f3e6-433a-b492-b1002f8124d0 00:18:58.262 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d862c6c3-f3e6-433a-b492-b1002f8124d0 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:58.522 04:42:47 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:58.522 04:42:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:58.522 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:58.522 { 00:18:58.522 "name": "ebc876b4-f7a4-4de1-806b-ec3766b4f578", 00:18:58.522 "aliases": [ 00:18:58.522 "lvs/nvme0n1p0" 00:18:58.522 ], 00:18:58.522 "product_name": "Logical Volume", 00:18:58.522 "block_size": 4096, 00:18:58.522 "num_blocks": 26476544, 00:18:58.522 "uuid": "ebc876b4-f7a4-4de1-806b-ec3766b4f578", 00:18:58.522 "assigned_rate_limits": { 00:18:58.522 "rw_ios_per_sec": 0, 00:18:58.522 "rw_mbytes_per_sec": 0, 00:18:58.522 "r_mbytes_per_sec": 0, 00:18:58.522 "w_mbytes_per_sec": 0 00:18:58.522 }, 00:18:58.522 "claimed": false, 00:18:58.522 "zoned": false, 00:18:58.522 "supported_io_types": { 00:18:58.522 "read": true, 00:18:58.522 "write": true, 00:18:58.522 "unmap": true, 00:18:58.522 "flush": false, 00:18:58.522 "reset": true, 00:18:58.522 "nvme_admin": false, 00:18:58.522 "nvme_io": false, 00:18:58.522 "nvme_io_md": false, 00:18:58.522 "write_zeroes": true, 00:18:58.522 "zcopy": false, 00:18:58.522 "get_zone_info": false, 00:18:58.522 "zone_management": false, 00:18:58.522 "zone_append": false, 00:18:58.522 "compare": false, 00:18:58.522 "compare_and_write": false, 00:18:58.522 "abort": false, 00:18:58.522 "seek_hole": true, 00:18:58.522 "seek_data": true, 00:18:58.522 "copy": false, 00:18:58.522 "nvme_iov_md": false 00:18:58.522 }, 00:18:58.522 "driver_specific": { 00:18:58.522 "lvol": { 00:18:58.522 "lvol_store_uuid": "d862c6c3-f3e6-433a-b492-b1002f8124d0", 00:18:58.522 "base_bdev": "nvme0n1", 00:18:58.522 "thin_provision": true, 00:18:58.522 "num_allocated_clusters": 0, 00:18:58.522 "snapshot": false, 00:18:58.522 "clone": false, 00:18:58.522 "esnap_clone": false 00:18:58.522 } 00:18:58.522 } 00:18:58.522 } 00:18:58.522 ]' 00:18:58.522 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:58.781 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:59.040 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:59.299 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:59.299 { 00:18:59.299 "name": "ebc876b4-f7a4-4de1-806b-ec3766b4f578", 00:18:59.299 "aliases": [ 00:18:59.299 "lvs/nvme0n1p0" 00:18:59.299 ], 00:18:59.299 "product_name": "Logical Volume", 00:18:59.299 "block_size": 4096, 00:18:59.299 "num_blocks": 26476544, 00:18:59.299 "uuid": "ebc876b4-f7a4-4de1-806b-ec3766b4f578", 00:18:59.299 "assigned_rate_limits": { 00:18:59.299 "rw_ios_per_sec": 0, 00:18:59.299 "rw_mbytes_per_sec": 0, 00:18:59.299 "r_mbytes_per_sec": 0, 00:18:59.299 "w_mbytes_per_sec": 0 00:18:59.299 }, 00:18:59.299 "claimed": false, 00:18:59.299 "zoned": false, 00:18:59.299 "supported_io_types": { 00:18:59.299 "read": true, 00:18:59.299 "write": true, 00:18:59.299 "unmap": true, 00:18:59.299 "flush": false, 00:18:59.299 "reset": true, 00:18:59.299 "nvme_admin": false, 00:18:59.299 "nvme_io": false, 00:18:59.299 "nvme_io_md": false, 00:18:59.299 "write_zeroes": true, 00:18:59.299 "zcopy": false, 00:18:59.299 "get_zone_info": false, 00:18:59.299 "zone_management": false, 00:18:59.299 "zone_append": false, 00:18:59.299 "compare": false, 00:18:59.299 "compare_and_write": false, 00:18:59.299 "abort": false, 00:18:59.299 "seek_hole": true, 00:18:59.299 "seek_data": true, 00:18:59.299 "copy": false, 00:18:59.299 "nvme_iov_md": false 00:18:59.299 }, 00:18:59.299 "driver_specific": { 00:18:59.299 "lvol": { 00:18:59.299 "lvol_store_uuid": "d862c6c3-f3e6-433a-b492-b1002f8124d0", 00:18:59.299 "base_bdev": "nvme0n1", 00:18:59.299 "thin_provision": true, 00:18:59.300 "num_allocated_clusters": 0, 00:18:59.300 "snapshot": false, 00:18:59.300 "clone": false, 00:18:59.300 "esnap_clone": false 00:18:59.300 } 00:18:59.300 } 00:18:59.300 } 00:18:59.300 ]' 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:59.300 04:42:48 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:59.559 04:42:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebc876b4-f7a4-4de1-806b-ec3766b4f578 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:59.819 { 00:18:59.819 "name": "ebc876b4-f7a4-4de1-806b-ec3766b4f578", 00:18:59.819 "aliases": [ 00:18:59.819 "lvs/nvme0n1p0" 00:18:59.819 ], 00:18:59.819 "product_name": "Logical Volume", 00:18:59.819 "block_size": 4096, 00:18:59.819 "num_blocks": 26476544, 00:18:59.819 "uuid": "ebc876b4-f7a4-4de1-806b-ec3766b4f578", 00:18:59.819 "assigned_rate_limits": { 00:18:59.819 "rw_ios_per_sec": 0, 00:18:59.819 "rw_mbytes_per_sec": 0, 00:18:59.819 "r_mbytes_per_sec": 0, 00:18:59.819 "w_mbytes_per_sec": 0 00:18:59.819 }, 00:18:59.819 "claimed": false, 00:18:59.819 "zoned": false, 00:18:59.819 "supported_io_types": { 00:18:59.819 "read": true, 00:18:59.819 "write": true, 00:18:59.819 "unmap": true, 00:18:59.819 "flush": false, 00:18:59.819 "reset": true, 00:18:59.819 "nvme_admin": false, 00:18:59.819 "nvme_io": false, 00:18:59.819 "nvme_io_md": false, 00:18:59.819 "write_zeroes": true, 00:18:59.819 "zcopy": false, 00:18:59.819 "get_zone_info": false, 00:18:59.819 "zone_management": false, 00:18:59.819 "zone_append": false, 00:18:59.819 "compare": false, 00:18:59.819 "compare_and_write": false, 00:18:59.819 "abort": false, 00:18:59.819 "seek_hole": true, 00:18:59.819 "seek_data": true, 00:18:59.819 "copy": false, 00:18:59.819 "nvme_iov_md": false 00:18:59.819 }, 00:18:59.819 "driver_specific": { 00:18:59.819 "lvol": { 00:18:59.819 "lvol_store_uuid": "d862c6c3-f3e6-433a-b492-b1002f8124d0", 00:18:59.819 "base_bdev": "nvme0n1", 00:18:59.819 "thin_provision": true, 00:18:59.819 "num_allocated_clusters": 0, 00:18:59.819 "snapshot": false, 00:18:59.819 "clone": false, 00:18:59.819 "esnap_clone": false 00:18:59.819 } 00:18:59.819 } 00:18:59.819 } 00:18:59.819 ]' 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:59.819 04:42:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ebc876b4-f7a4-4de1-806b-ec3766b4f578 -c nvc0n1p0 --l2p_dram_limit 20 00:19:00.080 [2024-10-15 04:42:49.433519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.433762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:00.080 [2024-10-15 04:42:49.433791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:00.080 [2024-10-15 04:42:49.433805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.433913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.433930] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.080 [2024-10-15 04:42:49.433941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:00.080 [2024-10-15 04:42:49.433958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.433978] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:00.080 [2024-10-15 04:42:49.434966] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:00.080 [2024-10-15 04:42:49.434991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.435010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.080 [2024-10-15 04:42:49.435021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:19:00.080 [2024-10-15 04:42:49.435033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.435109] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2742c012-cf6c-435c-90c7-91fc4b266dba 00:19:00.080 [2024-10-15 04:42:49.436623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.436657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:00.080 [2024-10-15 04:42:49.436672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:00.080 [2024-10-15 04:42:49.436688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.444511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.444665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:00.080 [2024-10-15 04:42:49.444694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.779 ms 00:19:00.080 [2024-10-15 04:42:49.444705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.444827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.444857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:00.080 [2024-10-15 04:42:49.444881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:00.080 [2024-10-15 04:42:49.444892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.444967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.444979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:00.080 [2024-10-15 04:42:49.444992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:00.080 [2024-10-15 04:42:49.445003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.445028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:00.080 [2024-10-15 04:42:49.450583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.450622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:00.080 [2024-10-15 04:42:49.450635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.573 ms 00:19:00.080 [2024-10-15 04:42:49.450649] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.450681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.450698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:00.080 [2024-10-15 04:42:49.450709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:00.080 [2024-10-15 04:42:49.450722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.450764] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:00.080 [2024-10-15 04:42:49.450901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:00.080 [2024-10-15 04:42:49.450919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:00.080 [2024-10-15 04:42:49.450936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:00.080 [2024-10-15 04:42:49.450949] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:00.080 [2024-10-15 04:42:49.450963] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:00.080 [2024-10-15 04:42:49.450974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:00.080 [2024-10-15 04:42:49.450987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:00.080 [2024-10-15 04:42:49.450997] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:00.080 [2024-10-15 04:42:49.451009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:00.080 [2024-10-15 04:42:49.451020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.451032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:00.080 [2024-10-15 04:42:49.451043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:00.080 [2024-10-15 04:42:49.451058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.451127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.080 [2024-10-15 04:42:49.451159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:00.080 [2024-10-15 04:42:49.451170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:00.080 [2024-10-15 04:42:49.451186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.080 [2024-10-15 04:42:49.451269] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:00.080 [2024-10-15 04:42:49.451283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:00.080 [2024-10-15 04:42:49.451295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.080 [2024-10-15 04:42:49.451309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.080 [2024-10-15 04:42:49.451323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:00.080 [2024-10-15 04:42:49.451335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:00.080 [2024-10-15 04:42:49.451345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:00.080 
[2024-10-15 04:42:49.451358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:00.080 [2024-10-15 04:42:49.451368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:00.080 [2024-10-15 04:42:49.451380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.080 [2024-10-15 04:42:49.451389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:00.080 [2024-10-15 04:42:49.451402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:00.080 [2024-10-15 04:42:49.451411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.080 [2024-10-15 04:42:49.451437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:00.081 [2024-10-15 04:42:49.451447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:00.081 [2024-10-15 04:42:49.451463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:00.081 [2024-10-15 04:42:49.451485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:00.081 [2024-10-15 04:42:49.451519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:00.081 [2024-10-15 04:42:49.451580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:00.081 [2024-10-15 04:42:49.451611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:00.081 [2024-10-15 04:42:49.451656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:00.081 [2024-10-15 04:42:49.451687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.081 [2024-10-15 04:42:49.451707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:00.081 [2024-10-15 04:42:49.451719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:00.081 [2024-10-15 04:42:49.451728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.081 [2024-10-15 04:42:49.451740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:00.081 [2024-10-15 04:42:49.451749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:00.081 [2024-10-15 04:42:49.451760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:00.081 [2024-10-15 04:42:49.451780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:00.081 [2024-10-15 04:42:49.451789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451800] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:00.081 [2024-10-15 04:42:49.451810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:00.081 [2024-10-15 04:42:49.451823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.081 [2024-10-15 04:42:49.451849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:00.081 [2024-10-15 04:42:49.451858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:00.081 [2024-10-15 04:42:49.451879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:00.081 [2024-10-15 04:42:49.451890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:00.081 [2024-10-15 04:42:49.451901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:00.081 [2024-10-15 04:42:49.451911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:00.081 [2024-10-15 04:42:49.451926] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:00.081 [2024-10-15 04:42:49.451938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.451952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:00.081 [2024-10-15 04:42:49.451962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:00.081 [2024-10-15 04:42:49.451975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:00.081 [2024-10-15 04:42:49.451985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:00.081 [2024-10-15 04:42:49.451998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:00.081 [2024-10-15 04:42:49.452008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:00.081 [2024-10-15 04:42:49.452021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:00.081 [2024-10-15 04:42:49.452032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:00.081 [2024-10-15 04:42:49.452047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:00.081 [2024-10-15 04:42:49.452057] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.452069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.452080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.452092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.452102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:00.081 [2024-10-15 04:42:49.452115] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:00.081 [2024-10-15 04:42:49.452127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.452142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:00.081 [2024-10-15 04:42:49.452152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:00.081 [2024-10-15 04:42:49.452165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:00.081 [2024-10-15 04:42:49.452175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:00.081 [2024-10-15 04:42:49.452188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.081 [2024-10-15 04:42:49.452199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:00.081 [2024-10-15 04:42:49.452211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:19:00.081 [2024-10-15 04:42:49.452224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.081 [2024-10-15 04:42:49.452264] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
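The layout dump above also pins down the --l2p_dram_limit 20 choice: the logical-to-physical table holds 20971520 entries of 4 bytes each, i.e. 80 MiB on media, of which at most 20 MiB may stay resident in DRAM. A quick cross-check of those figures, shell arithmetic only, mirroring the numbers printed above rather than SPDK internals:

l2p_entries=20971520                       # "L2P entries" from the dump above
l2p_addr_size=4                            # "L2P address size" in bytes
echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # -> 80, matching "Region l2p ... blocks: 80.00 MiB"

The "l2p maximum resident size is: 19 (of 20) MiB" notice further down is consistent with that cap.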
00:19:00.081 [2024-10-15 04:42:49.452276] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:03.400 [2024-10-15 04:42:52.611842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.400 [2024-10-15 04:42:52.611912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:03.400 [2024-10-15 04:42:52.611933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3164.685 ms 00:19:03.400 [2024-10-15 04:42:52.611948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.400 [2024-10-15 04:42:52.651379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.400 [2024-10-15 04:42:52.651437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.400 [2024-10-15 04:42:52.651460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.225 ms 00:19:03.400 [2024-10-15 04:42:52.651471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.400 [2024-10-15 04:42:52.651638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.400 [2024-10-15 04:42:52.651654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:03.400 [2024-10-15 04:42:52.651671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:03.400 [2024-10-15 04:42:52.651682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.400 [2024-10-15 04:42:52.713465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.713522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.401 [2024-10-15 04:42:52.713542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.836 ms 00:19:03.401 [2024-10-15 04:42:52.713553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.713602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.713614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.401 [2024-10-15 04:42:52.713628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:03.401 [2024-10-15 04:42:52.713641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.714160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.714178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.401 [2024-10-15 04:42:52.714191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:19:03.401 [2024-10-15 04:42:52.714202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.714315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.714328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.401 [2024-10-15 04:42:52.714344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:03.401 [2024-10-15 04:42:52.714354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.734309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.734352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.401 [2024-10-15 
04:42:52.734370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.966 ms 00:19:03.401 [2024-10-15 04:42:52.734381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.747355] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:03.401 [2024-10-15 04:42:52.753403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.753447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:03.401 [2024-10-15 04:42:52.753462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.956 ms 00:19:03.401 [2024-10-15 04:42:52.753475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.841275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.841344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:03.401 [2024-10-15 04:42:52.841360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.888 ms 00:19:03.401 [2024-10-15 04:42:52.841390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.841586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.841606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:03.401 [2024-10-15 04:42:52.841617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:03.401 [2024-10-15 04:42:52.841630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.401 [2024-10-15 04:42:52.879809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.401 [2024-10-15 04:42:52.879874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:03.401 [2024-10-15 04:42:52.879889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.186 ms 00:19:03.401 [2024-10-15 04:42:52.879903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.659 [2024-10-15 04:42:52.916491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.659 [2024-10-15 04:42:52.916538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:03.659 [2024-10-15 04:42:52.916553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.605 ms 00:19:03.659 [2024-10-15 04:42:52.916567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.659 [2024-10-15 04:42:52.917367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:52.917452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:03.660 [2024-10-15 04:42:52.917469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:19:03.660 [2024-10-15 04:42:52.917483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 04:42:53.019019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:53.019092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:03.660 [2024-10-15 04:42:53.019109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.628 ms 00:19:03.660 [2024-10-15 04:42:53.019122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 
04:42:53.058711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:53.058929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:03.660 [2024-10-15 04:42:53.058954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.545 ms 00:19:03.660 [2024-10-15 04:42:53.058969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 04:42:53.097465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:53.097725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:03.660 [2024-10-15 04:42:53.097751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.453 ms 00:19:03.660 [2024-10-15 04:42:53.097765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 04:42:53.135708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:53.135757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:03.660 [2024-10-15 04:42:53.135772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.925 ms 00:19:03.660 [2024-10-15 04:42:53.135785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 04:42:53.135843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:53.135866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:03.660 [2024-10-15 04:42:53.135877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:03.660 [2024-10-15 04:42:53.135890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 04:42:53.135997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.660 [2024-10-15 04:42:53.136018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:03.660 [2024-10-15 04:42:53.136028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:03.660 [2024-10-15 04:42:53.136041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.660 [2024-10-15 04:42:53.137169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3709.168 ms, result 0 00:19:03.660 { 00:19:03.660 "name": "ftl0", 00:19:03.660 "uuid": "2742c012-cf6c-435c-90c7-91fc4b266dba" 00:19:03.660 } 00:19:03.919 04:42:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:03.919 04:42:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:03.919 04:42:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:03.919 04:42:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:04.178 [2024-10-15 04:42:53.493401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:04.178 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:04.178 Zero copy mechanism will not be used. 00:19:04.178 Running I/O for 4 seconds... 
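This first measurement pass is a queue-depth-1 random-write job with 69632-byte transfers, i.e. 17 blocks of 4096 bytes, which, as the notice above says, exceeds the 65536-byte zero-copy threshold, so the buffered path gets exercised. In every results table that follows, the columns are tied by MiB/s = IOPS * io_size / 2^20; a hypothetical helper for cross-checking any row (not part of bdevperf.sh):

mibps() { awk -v i="$1" -v o="$2" 'BEGIN { printf "%.2f\n", i * o / 1048576 }'; }
mibps 1786.74 69632                        # -> 118.65, the QD1 randwrite row reported below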
00:19:06.042 1734.00 IOPS, 115.15 MiB/s [2024-10-15T04:42:56.917Z] 1764.50 IOPS, 117.17 MiB/s [2024-10-15T04:42:57.853Z] 1785.33 IOPS, 118.56 MiB/s [2024-10-15T04:42:57.853Z] 1787.25 IOPS, 118.68 MiB/s 00:19:08.349 Latency(us) 00:19:08.349 [2024-10-15T04:42:57.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.349 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:08.349 ftl0 : 4.00 1786.74 118.65 0.00 0.00 587.39 236.88 2171.37 00:19:08.349 [2024-10-15T04:42:57.853Z] =================================================================================================================== 00:19:08.349 [2024-10-15T04:42:57.853Z] Total : 1786.74 118.65 0.00 0.00 587.39 236.88 2171.37 00:19:08.349 [2024-10-15 04:42:57.498637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:08.349 { 00:19:08.349 "results": [ 00:19:08.349 { 00:19:08.349 "job": "ftl0", 00:19:08.349 "core_mask": "0x1", 00:19:08.349 "workload": "randwrite", 00:19:08.349 "status": "finished", 00:19:08.349 "queue_depth": 1, 00:19:08.349 "io_size": 69632, 00:19:08.349 "runtime": 4.001707, 00:19:08.349 "iops": 1786.7375097677066, 00:19:08.349 "mibps": 118.65053775801177, 00:19:08.349 "io_failed": 0, 00:19:08.349 "io_timeout": 0, 00:19:08.349 "avg_latency_us": 587.3872440812199, 00:19:08.349 "min_latency_us": 236.87710843373495, 00:19:08.349 "max_latency_us": 2171.373493975904 00:19:08.349 } 00:19:08.349 ], 00:19:08.349 "core_count": 1 00:19:08.349 } 00:19:08.349 04:42:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:08.349 [2024-10-15 04:42:57.631573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.349 Running I/O for 4 seconds... 
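While the 128-deep run proceeds, note that the QD1 table above is internally consistent with Little's law: outstanding I/O equals IOPS times mean latency. The same check applied to the deeper runs that follow lands at roughly 128. Arithmetic only, not bdevperf output:

awk 'BEGIN { printf "%.2f\n", 1786.74 * 587.39e-6 }'      # -> 1.05 outstanding I/Os at QD 1
awk 'BEGIN { printf "%.2f\n", 11015.56 * 11586.27e-6 }'   # -> 127.63 at QD 128 (table below)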
00:19:10.225 11115.00 IOPS, 43.42 MiB/s [2024-10-15T04:43:00.665Z] 11128.50 IOPS, 43.47 MiB/s [2024-10-15T04:43:02.042Z] 11170.67 IOPS, 43.64 MiB/s [2024-10-15T04:43:02.042Z] 11048.50 IOPS, 43.16 MiB/s 00:19:12.538 Latency(us) 00:19:12.538 [2024-10-15T04:43:02.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.538 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.538 ftl0 : 4.02 11015.56 43.03 0.00 0.00 11586.27 220.43 34320.86 00:19:12.538 [2024-10-15T04:43:02.042Z] =================================================================================================================== 00:19:12.538 [2024-10-15T04:43:02.042Z] Total : 11015.56 43.03 0.00 0.00 11586.27 0.00 34320.86 00:19:12.538 [2024-10-15 04:43:01.659528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:12.538 { 00:19:12.538 "results": [ 00:19:12.538 { 00:19:12.538 "job": "ftl0", 00:19:12.538 "core_mask": "0x1", 00:19:12.538 "workload": "randwrite", 00:19:12.538 "status": "finished", 00:19:12.538 "queue_depth": 128, 00:19:12.538 "io_size": 4096, 00:19:12.538 "runtime": 4.023581, 00:19:12.538 "iops": 11015.560516862965, 00:19:12.538 "mibps": 43.029533268995955, 00:19:12.538 "io_failed": 0, 00:19:12.538 "io_timeout": 0, 00:19:12.538 "avg_latency_us": 11586.273656695279, 00:19:12.538 "min_latency_us": 220.4273092369478, 00:19:12.538 "max_latency_us": 34320.86104417671 00:19:12.538 } 00:19:12.538 ], 00:19:12.538 "core_count": 1 00:19:12.538 } 00:19:12.538 04:43:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:12.538 [2024-10-15 04:43:01.774084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:12.538 Running I/O for 4 seconds... 
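The verify workload now in flight writes and then reads back each block, comparing contents. Its range, reported below as "Verification LBA range: start 0x0 length 0x1400000", spans the bdev's full 80 GiB logical space, one LBA per entry of the L2P table from the startup dump. Converting with plain shell, assuming the 4096-byte block size shown earlier:

printf '%d\n' 0x1400000                    # -> 20971520 LBAs, matching "L2P entries" above
echo $(( 0x1400000 * 4096 / 1073741824 ))  # -> 80 GiB covered by the verify pass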
00:19:14.407 7823.00 IOPS, 30.56 MiB/s [2024-10-15T04:43:04.846Z] 8105.00 IOPS, 31.66 MiB/s [2024-10-15T04:43:05.781Z] 8007.33 IOPS, 31.28 MiB/s [2024-10-15T04:43:06.040Z] 7989.75 IOPS, 31.21 MiB/s 00:19:16.536 Latency(us) 00:19:16.536 [2024-10-15T04:43:06.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.536 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:16.536 Verification LBA range: start 0x0 length 0x1400000 00:19:16.536 ftl0 : 4.01 8000.60 31.25 0.00 0.00 15950.56 263.20 33899.75 00:19:16.536 [2024-10-15T04:43:06.040Z] =================================================================================================================== 00:19:16.536 [2024-10-15T04:43:06.040Z] Total : 8000.60 31.25 0.00 0.00 15950.56 0.00 33899.75 00:19:16.536 { 00:19:16.536 "results": [ 00:19:16.536 { 00:19:16.536 "job": "ftl0", 00:19:16.536 "core_mask": "0x1", 00:19:16.536 "workload": "verify", 00:19:16.536 "status": "finished", 00:19:16.536 "verify_range": { 00:19:16.536 "start": 0, 00:19:16.536 "length": 20971520 00:19:16.536 }, 00:19:16.536 "queue_depth": 128, 00:19:16.536 "io_size": 4096, 00:19:16.536 "runtime": 4.010447, 00:19:16.536 "iops": 8000.604421402402, 00:19:16.536 "mibps": 31.252361021103134, 00:19:16.536 "io_failed": 0, 00:19:16.536 "io_timeout": 0, 00:19:16.536 "avg_latency_us": 15950.562171793825, 00:19:16.536 "min_latency_us": 263.19678714859435, 00:19:16.536 "max_latency_us": 33899.74618473896 00:19:16.536 } 00:19:16.536 ], 00:19:16.536 "core_count": 1 00:19:16.536 } 00:19:16.536 [2024-10-15 04:43:05.798178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:16.536 04:43:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:16.536 [2024-10-15 04:43:06.030103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.536 [2024-10-15 04:43:06.030378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.536 [2024-10-15 04:43:06.030411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.536 [2024-10-15 04:43:06.030425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.536 [2024-10-15 04:43:06.030466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.536 [2024-10-15 04:43:06.034904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.536 [2024-10-15 04:43:06.034938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.536 [2024-10-15 04:43:06.034956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.422 ms 00:19:16.536 [2024-10-15 04:43:06.034967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.536 [2024-10-15 04:43:06.036761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.536 [2024-10-15 04:43:06.036802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.536 [2024-10-15 04:43:06.036834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:19:16.536 [2024-10-15 04:43:06.036846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.794 [2024-10-15 04:43:06.221308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.794 [2024-10-15 04:43:06.221385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:16.794 [2024-10-15 04:43:06.221411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.706 ms 00:19:16.794 [2024-10-15 04:43:06.221423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.794 [2024-10-15 04:43:06.226890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.794 [2024-10-15 04:43:06.227058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.794 [2024-10-15 04:43:06.227089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.426 ms 00:19:16.794 [2024-10-15 04:43:06.227100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.794 [2024-10-15 04:43:06.266536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.794 [2024-10-15 04:43:06.266592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.794 [2024-10-15 04:43:06.266612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.413 ms 00:19:16.794 [2024-10-15 04:43:06.266640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.794 [2024-10-15 04:43:06.290432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.794 [2024-10-15 04:43:06.290493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.794 [2024-10-15 04:43:06.290517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.766 ms 00:19:16.794 [2024-10-15 04:43:06.290529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.794 [2024-10-15 04:43:06.290707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.794 [2024-10-15 04:43:06.290723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.794 [2024-10-15 04:43:06.290741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:16.794 [2024-10-15 04:43:06.290752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.053 [2024-10-15 04:43:06.331018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.053 [2024-10-15 04:43:06.331079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:17.053 [2024-10-15 04:43:06.331099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.303 ms 00:19:17.053 [2024-10-15 04:43:06.331126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.053 [2024-10-15 04:43:06.369751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.054 [2024-10-15 04:43:06.370016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:17.054 [2024-10-15 04:43:06.370048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.627 ms 00:19:17.054 [2024-10-15 04:43:06.370059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.054 [2024-10-15 04:43:06.410922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.054 [2024-10-15 04:43:06.410984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.054 [2024-10-15 04:43:06.411005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.814 ms 00:19:17.054 [2024-10-15 04:43:06.411016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.054 [2024-10-15 04:43:06.452436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.054 [2024-10-15 
04:43:06.452518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.054 [2024-10-15 04:43:06.452543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.250 ms 00:19:17.054 [2024-10-15 04:43:06.452555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.054 [2024-10-15 04:43:06.452620] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.054 [2024-10-15 04:43:06.452640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:19:17.054 [2024-10-15 04:43:06.452963 - 04:43:06.454035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 23-100: 0 / 261120 wr_cnt: 0 state: free [78 identical per-band lines condensed] 00:19:17.055 [2024-10-15 04:43:06.454055] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:17.055 [2024-10-15 04:43:06.454069] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2742c012-cf6c-435c-90c7-91fc4b266dba 00:19:17.055 [2024-10-15 04:43:06.454081] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.055 [2024-10-15 04:43:06.454095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.055 [2024-10-15 04:43:06.454109] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.055 [2024-10-15 04:43:06.454123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.055 [2024-10-15 04:43:06.454134] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.055 [2024-10-15 04:43:06.454148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.055 [2024-10-15 04:43:06.454158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.055 [2024-10-15 04:43:06.454174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.055 [2024-10-15 04:43:06.454184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.055 [2024-10-15 04:43:06.454198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.055 [2024-10-15 04:43:06.454209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.055 [2024-10-15 04:43:06.454224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:19:17.055 [2024-10-15 04:43:06.454235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.055 [2024-10-15 04:43:06.475893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.055 [2024-10-15 04:43:06.475955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.055 [2024-10-15 04:43:06.475975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.599 ms 00:19:17.055 [2024-10-15 04:43:06.475986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.055 [2024-10-15 04:43:06.476554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.055 [2024-10-15 04:43:06.476575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.055 [2024-10-15 04:43:06.476590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:19:17.055 [2024-10-15 04:43:06.476600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
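
Each management step above and below is reported by trace_step() in mngt/ftl_mngt.c as an Action/Rollback, name, duration, status quartet. When skimming a long shutdown trace like this one, it can help to collapse the quartets into a name/duration table sorted by cost; a minimal sketch, assuming one NOTICE record per line in the captured log and a placeholder file name (build.log):

  # List the slowest FTL management steps first.
  # 'name:' and 'duration:' arrive as consecutive trace_step records, so
  # stripping the keys and pasting line pairs yields name<TAB>duration.
  grep trace_step build.log \
    | grep -oE '(name|duration): .*' \
    | sed 's/^[a-z]*: //' \
    | paste - - \
    | sort -t$'\t' -k2,2 -gr | head

On the records visible in this run it would put "Deinitialize L2P" (21.599 ms) at the top, with the rollback steps all at 0.000 ms.
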
00:19:17.055 [2024-10-15 04:43:06.535673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.055 [2024-10-15 04:43:06.535909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.055 [2024-10-15 04:43:06.535946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.055 [2024-10-15 04:43:06.535958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.055 [2024-10-15 04:43:06.536040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.055 [2024-10-15 04:43:06.536051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.055 [2024-10-15 04:43:06.536065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.055 [2024-10-15 04:43:06.536076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.055 [2024-10-15 04:43:06.536208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.055 [2024-10-15 04:43:06.536225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.055 [2024-10-15 04:43:06.536239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.055 [2024-10-15 04:43:06.536250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.055 [2024-10-15 04:43:06.536272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.055 [2024-10-15 04:43:06.536283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.055 [2024-10-15 04:43:06.536296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.055 [2024-10-15 04:43:06.536307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.668473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.668540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.314 [2024-10-15 04:43:06.668561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.668572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.775805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.775891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.314 [2024-10-15 04:43:06.775910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.775922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.776053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.314 [2024-10-15 04:43:06.776070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.776080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.776151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.314 [2024-10-15 04:43:06.776164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.776174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.776304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.314 [2024-10-15 04:43:06.776336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000
ms 00:19:17.314 [2024-10-15 04:43:06.776351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.776405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.314 [2024-10-15 04:43:06.776419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.776430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.776494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.314 [2024-10-15 04:43:06.776507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.776520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.314 [2024-10-15 04:43:06.776585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.314 [2024-10-15 04:43:06.776616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.314 [2024-10-15 04:43:06.776626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.314 [2024-10-15 04:43:06.776753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 747.823 ms, result 0 00:19:17.314 true 00:19:17.314 04:43:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75329 00:19:17.314 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75329 ']' 00:19:17.314 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75329 00:19:17.314 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:19:17.314 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.572 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75329 00:19:17.572 killing process with pid 75329 00:19:17.572 Received shutdown signal, test time was about 4.000000 seconds 00:19:17.572 00:19:17.572 Latency(us) 00:19:17.572 [2024-10-15T04:43:07.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.572 [2024-10-15T04:43:07.076Z] =================================================================================================================== 00:19:17.572 [2024-10-15T04:43:07.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.572 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:17.572 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:17.572 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75329' 00:19:17.572 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75329 00:19:17.572 04:43:06 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75329 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:21.030 Remove shared memory files 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:21.030 04:43:10 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:21.030 ************************************ 00:19:21.030 END TEST ftl_bdevperf 00:19:21.030 ************************************ 00:19:21.030 00:19:21.030 real 0m25.493s 00:19:21.030 user 0m28.204s 00:19:21.030 sys 0m1.332s 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.030 04:43:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:21.289 04:43:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:21.289 04:43:10 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:21.289 04:43:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.289 04:43:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:21.289 ************************************ 00:19:21.289 START TEST ftl_trim 00:19:21.289 ************************************ 00:19:21.289 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:21.289 * Looking for test storage... 00:19:21.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.289 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:21.289 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:19:21.289 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:21.289 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.289 04:43:10 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.547 04:43:10 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:21.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.548 --rc genhtml_branch_coverage=1 00:19:21.548 --rc genhtml_function_coverage=1 00:19:21.548 --rc genhtml_legend=1 00:19:21.548 --rc geninfo_all_blocks=1 00:19:21.548 --rc geninfo_unexecuted_blocks=1 00:19:21.548 00:19:21.548 ' 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:21.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.548 --rc genhtml_branch_coverage=1 00:19:21.548 --rc genhtml_function_coverage=1 00:19:21.548 --rc genhtml_legend=1 00:19:21.548 --rc geninfo_all_blocks=1 00:19:21.548 --rc geninfo_unexecuted_blocks=1 00:19:21.548 00:19:21.548 ' 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:21.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.548 --rc genhtml_branch_coverage=1 00:19:21.548 --rc genhtml_function_coverage=1 00:19:21.548 --rc genhtml_legend=1 00:19:21.548 --rc geninfo_all_blocks=1 00:19:21.548 --rc geninfo_unexecuted_blocks=1 00:19:21.548 00:19:21.548 ' 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:21.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.548 --rc genhtml_branch_coverage=1 00:19:21.548 --rc genhtml_function_coverage=1 00:19:21.548 --rc genhtml_legend=1 00:19:21.548 --rc geninfo_all_blocks=1 00:19:21.548 --rc geninfo_unexecuted_blocks=1 00:19:21.548 00:19:21.548 ' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
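
The xtrace above steps through common.sh's version helpers: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on ./-/: via IFS, pads the shorter one, and compares field by field (here 1 < 2 decides it in the first field, which is why the lcov branch is taken). A condensed re-implementation of the same idea, for illustration only (the in-tree helper additionally validates each field through its decimal() wrapper, as seen in the trace):

  # Field-wise "less than" over dotted version strings (illustrative sketch).
  lt() {
      local -a v1 v2
      local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          # A missing field compares as 0, so "2" is treated as "2.0".
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1 # equal is not less-than
  }
  lt 1.15 2 && echo older   # prints "older", matching the lcov check above
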
00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.548 04:43:10 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75692 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75692 00:19:21.548 04:43:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75692 ']' 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.548 04:43:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:21.548 [2024-10-15 04:43:10.966356] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:19:21.548 [2024-10-15 04:43:10.966481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75692 ] 00:19:21.806 [2024-10-15 04:43:11.141508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.806 [2024-10-15 04:43:11.268864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.806 [2024-10-15 04:43:11.268966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.806 [2024-10-15 04:43:11.269002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.741 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.741 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:22.741 04:43:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:22.741 04:43:12 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:22.741 04:43:12 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:22.741 04:43:12 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:22.741 04:43:12 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:22.741 04:43:12 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:22.999 04:43:12 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:22.999 04:43:12 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:22.999 04:43:12 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:22.999 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:22.999 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:22.999 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:22.999 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:22.999 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:23.258 { 00:19:23.258 "name": "nvme0n1", 00:19:23.258 "aliases": [ 
00:19:23.258 "3c49ad39-c706-4b23-a2d1-2ea2a36b9f7f" 00:19:23.258 ], 00:19:23.258 "product_name": "NVMe disk", 00:19:23.258 "block_size": 4096, 00:19:23.258 "num_blocks": 1310720, 00:19:23.258 "uuid": "3c49ad39-c706-4b23-a2d1-2ea2a36b9f7f", 00:19:23.258 "numa_id": -1, 00:19:23.258 "assigned_rate_limits": { 00:19:23.258 "rw_ios_per_sec": 0, 00:19:23.258 "rw_mbytes_per_sec": 0, 00:19:23.258 "r_mbytes_per_sec": 0, 00:19:23.258 "w_mbytes_per_sec": 0 00:19:23.258 }, 00:19:23.258 "claimed": true, 00:19:23.258 "claim_type": "read_many_write_one", 00:19:23.258 "zoned": false, 00:19:23.258 "supported_io_types": { 00:19:23.258 "read": true, 00:19:23.258 "write": true, 00:19:23.258 "unmap": true, 00:19:23.258 "flush": true, 00:19:23.258 "reset": true, 00:19:23.258 "nvme_admin": true, 00:19:23.258 "nvme_io": true, 00:19:23.258 "nvme_io_md": false, 00:19:23.258 "write_zeroes": true, 00:19:23.258 "zcopy": false, 00:19:23.258 "get_zone_info": false, 00:19:23.258 "zone_management": false, 00:19:23.258 "zone_append": false, 00:19:23.258 "compare": true, 00:19:23.258 "compare_and_write": false, 00:19:23.258 "abort": true, 00:19:23.258 "seek_hole": false, 00:19:23.258 "seek_data": false, 00:19:23.258 "copy": true, 00:19:23.258 "nvme_iov_md": false 00:19:23.258 }, 00:19:23.258 "driver_specific": { 00:19:23.258 "nvme": [ 00:19:23.258 { 00:19:23.258 "pci_address": "0000:00:11.0", 00:19:23.258 "trid": { 00:19:23.258 "trtype": "PCIe", 00:19:23.258 "traddr": "0000:00:11.0" 00:19:23.258 }, 00:19:23.258 "ctrlr_data": { 00:19:23.258 "cntlid": 0, 00:19:23.258 "vendor_id": "0x1b36", 00:19:23.258 "model_number": "QEMU NVMe Ctrl", 00:19:23.258 "serial_number": "12341", 00:19:23.258 "firmware_revision": "8.0.0", 00:19:23.258 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:23.258 "oacs": { 00:19:23.258 "security": 0, 00:19:23.258 "format": 1, 00:19:23.258 "firmware": 0, 00:19:23.258 "ns_manage": 1 00:19:23.258 }, 00:19:23.258 "multi_ctrlr": false, 00:19:23.258 "ana_reporting": false 00:19:23.258 }, 00:19:23.258 "vs": { 00:19:23.258 "nvme_version": "1.4" 00:19:23.258 }, 00:19:23.258 "ns_data": { 00:19:23.258 "id": 1, 00:19:23.258 "can_share": false 00:19:23.258 } 00:19:23.258 } 00:19:23.258 ], 00:19:23.258 "mp_policy": "active_passive" 00:19:23.258 } 00:19:23.258 } 00:19:23.258 ]' 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:23.258 04:43:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:19:23.258 04:43:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:23.258 04:43:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:23.258 04:43:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:23.527 04:43:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:23.527 04:43:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:23.527 04:43:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d862c6c3-f3e6-433a-b492-b1002f8124d0 00:19:23.527 04:43:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:23.527 04:43:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u d862c6c3-f3e6-433a-b492-b1002f8124d0 00:19:23.786 04:43:13 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:24.045 04:43:13 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5e683ac7-03d8-4e29-b8d5-6aa9a4d8f12e 00:19:24.045 04:43:13 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5e683ac7-03d8-4e29-b8d5-6aa9a4d8f12e 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:24.304 04:43:13 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.304 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.304 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:24.304 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:24.304 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:24.304 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.563 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:24.563 { 00:19:24.563 "name": "acc6871d-fef9-47a2-9165-c122826caef8", 00:19:24.563 "aliases": [ 00:19:24.563 "lvs/nvme0n1p0" 00:19:24.563 ], 00:19:24.563 "product_name": "Logical Volume", 00:19:24.563 "block_size": 4096, 00:19:24.563 "num_blocks": 26476544, 00:19:24.563 "uuid": "acc6871d-fef9-47a2-9165-c122826caef8", 00:19:24.563 "assigned_rate_limits": { 00:19:24.563 "rw_ios_per_sec": 0, 00:19:24.563 "rw_mbytes_per_sec": 0, 00:19:24.563 "r_mbytes_per_sec": 0, 00:19:24.563 "w_mbytes_per_sec": 0 00:19:24.563 }, 00:19:24.563 "claimed": false, 00:19:24.563 "zoned": false, 00:19:24.563 "supported_io_types": { 00:19:24.563 "read": true, 00:19:24.563 "write": true, 00:19:24.563 "unmap": true, 00:19:24.563 "flush": false, 00:19:24.563 "reset": true, 00:19:24.563 "nvme_admin": false, 00:19:24.563 "nvme_io": false, 00:19:24.563 "nvme_io_md": false, 00:19:24.563 "write_zeroes": true, 00:19:24.563 "zcopy": false, 00:19:24.563 "get_zone_info": false, 00:19:24.563 "zone_management": false, 00:19:24.563 "zone_append": false, 00:19:24.563 "compare": false, 00:19:24.563 "compare_and_write": false, 00:19:24.563 "abort": false, 00:19:24.563 "seek_hole": true, 00:19:24.563 "seek_data": true, 00:19:24.563 "copy": false, 00:19:24.563 "nvme_iov_md": false 00:19:24.563 }, 00:19:24.563 "driver_specific": { 00:19:24.563 "lvol": { 00:19:24.563 "lvol_store_uuid": "5e683ac7-03d8-4e29-b8d5-6aa9a4d8f12e", 00:19:24.563 "base_bdev": "nvme0n1", 00:19:24.563 "thin_provision": true, 00:19:24.563 "num_allocated_clusters": 0, 00:19:24.563 "snapshot": false, 00:19:24.563 "clone": false, 00:19:24.563 "esnap_clone": false 00:19:24.563 } 00:19:24.563 } 00:19:24.563 } 00:19:24.563 ]' 00:19:24.563 04:43:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:24.563 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:24.563 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:24.563 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:24.563 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:24.563 04:43:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:24.563 04:43:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:24.563 04:43:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:24.563 04:43:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:24.822 04:43:14 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:24.822 04:43:14 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:24.822 04:43:14 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.822 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=acc6871d-fef9-47a2-9165-c122826caef8 00:19:24.822 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:24.822 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:24.822 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:24.822 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b acc6871d-fef9-47a2-9165-c122826caef8 00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ ... ]' [second bdev_get_bdevs dump for acc6871d-fef9-47a2-9165-c122826caef8, identical to the first; elided] 00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
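
The get_bdev_size helper traced repeatedly above is a thin wrapper: bdev_get_bdevs piped through jq, with the product converted to MiB (4096 B x 26476544 blocks = 103424 MiB for this lvol, and 5120 MiB for nvme0n1 earlier). Stripped of the xtrace plumbing it reduces to something like the following sketch, reusing the rpc.py path and bdev name from this run (a live spdk_tgt is assumed):

  # Sketch: bdev size in MiB, as derived in the get_bdev_size trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev_info=$("$rpc" bdev_get_bdevs -b acc6871d-fef9-47a2-9165-c122826caef8)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544
  echo $(( bs * nb / 1024 / 1024 ))             # 103424 (MiB)
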
00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:25.080 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:25.080 04:43:14 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:25.080 04:43:14 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:25.339 04:43:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:25.339 04:43:14 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:25.339 04:43:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size acc6871d-fef9-47a2-9165-c122826caef8 00:19:25.339 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=acc6871d-fef9-47a2-9165-c122826caef8 00:19:25.339 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:25.339 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:25.339 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:25.339 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b acc6871d-fef9-47a2-9165-c122826caef8 00:19:25.598 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ ... ]' [third bdev_get_bdevs dump for acc6871d-fef9-47a2-9165-c122826caef8, identical to the first; elided] 00:19:25.598 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:25.598 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:25.598 04:43:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
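
The l2p_percentage=60 set above, together with this final size probe, yields the l2p_dram_size_mb=60 that is passed as --l2p_dram_limit to bdev_ftl_create just below. One derivation that reproduces the traced numbers is 60% of a full L2P table at 4 bytes per 4 KiB block; this is illustrative arithmetic consistent with the log, not necessarily the literal expression in trim.sh:

  # 26476544 blocks x 4 B/entry = ~101 MiB full L2P table; 60% of that, in MiB:
  nb=26476544        # num_blocks of the base lvol, from the dump above
  l2p_percentage=60
  echo $(( nb * 4 * l2p_percentage / 100 / 1024 / 1024 ))   # 60
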
00:19:25.598 04:43:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:25.598 04:43:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:25.598 04:43:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:25.598 04:43:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:25.598 04:43:15 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d acc6871d-fef9-47a2-9165-c122826caef8 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:25.858 [2024-10-15 04:43:15.210432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.210494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:25.858 [2024-10-15 04:43:15.210515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:25.858 [2024-10-15 04:43:15.210527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.213900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.213948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.858 [2024-10-15 04:43:15.213968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:19:25.858 [2024-10-15 04:43:15.213978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.214139] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:25.858 [2024-10-15 04:43:15.215161] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:25.858 [2024-10-15 04:43:15.215200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.215211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.858 [2024-10-15 04:43:15.215225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:19:25.858 [2024-10-15 04:43:15.215236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.215347] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88ef924c-9641-4650-a021-afc46cb8fbb7 00:19:25.858 [2024-10-15 04:43:15.216802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.216847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:25.858 [2024-10-15 04:43:15.216861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:25.858 [2024-10-15 04:43:15.216876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.224420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.224608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.858 [2024-10-15 04:43:15.224632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.476 ms 00:19:25.858 [2024-10-15 04:43:15.224646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.224852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.224872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.858 [2024-10-15 04:43:15.224885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.107 ms 00:19:25.858 [2024-10-15 04:43:15.224902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.224944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.224959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:25.858 [2024-10-15 04:43:15.224970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:25.858 [2024-10-15 04:43:15.224983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.225022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:25.858 [2024-10-15 04:43:15.230317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.230353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.858 [2024-10-15 04:43:15.230370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.305 ms 00:19:25.858 [2024-10-15 04:43:15.230381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.230457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.230470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:25.858 [2024-10-15 04:43:15.230484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:25.858 [2024-10-15 04:43:15.230512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.230551] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:25.858 [2024-10-15 04:43:15.230681] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:25.858 [2024-10-15 04:43:15.230702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:25.858 [2024-10-15 04:43:15.230716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:25.858 [2024-10-15 04:43:15.230732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:25.858 [2024-10-15 04:43:15.230745] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:25.858 [2024-10-15 04:43:15.230760] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:25.858 [2024-10-15 04:43:15.230771] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:25.858 [2024-10-15 04:43:15.230784] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:25.858 [2024-10-15 04:43:15.230794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:25.858 [2024-10-15 04:43:15.230807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 [2024-10-15 04:43:15.230842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:25.858 [2024-10-15 04:43:15.230861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:25.858 [2024-10-15 04:43:15.230872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.230975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.858 
[2024-10-15 04:43:15.230987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:25.858 [2024-10-15 04:43:15.231001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:25.858 [2024-10-15 04:43:15.231012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.858 [2024-10-15 04:43:15.231134] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:25.858 [2024-10-15 04:43:15.231147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:25.858 [2024-10-15 04:43:15.231161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.858 [2024-10-15 04:43:15.231175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:25.858 [2024-10-15 04:43:15.231199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:25.858 [2024-10-15 04:43:15.231221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:25.858 [2024-10-15 04:43:15.231234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.858 [2024-10-15 04:43:15.231257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:25.858 [2024-10-15 04:43:15.231266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:25.858 [2024-10-15 04:43:15.231279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.858 [2024-10-15 04:43:15.231288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:25.858 [2024-10-15 04:43:15.231301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:25.858 [2024-10-15 04:43:15.231311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:25.858 [2024-10-15 04:43:15.231337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:25.858 [2024-10-15 04:43:15.231351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:25.858 [2024-10-15 04:43:15.231373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.858 [2024-10-15 04:43:15.231396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:25.858 [2024-10-15 04:43:15.231406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:25.858 [2024-10-15 04:43:15.231418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.858 [2024-10-15 04:43:15.231428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:25.858 [2024-10-15 04:43:15.231440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:25.859 [2024-10-15 04:43:15.231450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.859 [2024-10-15 04:43:15.231462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:25.859 [2024-10-15 04:43:15.231472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:25.859 [2024-10-15 04:43:15.231484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.859 [2024-10-15 04:43:15.231494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:25.859 [2024-10-15 04:43:15.231509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:25.859 [2024-10-15 04:43:15.231518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.859 [2024-10-15 04:43:15.231531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:25.859 [2024-10-15 04:43:15.231541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:25.859 [2024-10-15 04:43:15.231553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.859 [2024-10-15 04:43:15.231563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:25.859 [2024-10-15 04:43:15.231576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:25.859 [2024-10-15 04:43:15.231586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.859 [2024-10-15 04:43:15.231598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:25.859 [2024-10-15 04:43:15.231607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:25.859 [2024-10-15 04:43:15.231619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.859 [2024-10-15 04:43:15.231629] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:25.859 [2024-10-15 04:43:15.231642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:25.859 [2024-10-15 04:43:15.231653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.859 [2024-10-15 04:43:15.231667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.859 [2024-10-15 04:43:15.231679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:25.859 [2024-10-15 04:43:15.231694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:25.859 [2024-10-15 04:43:15.231704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:25.859 [2024-10-15 04:43:15.231717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:25.859 [2024-10-15 04:43:15.231727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:25.859 [2024-10-15 04:43:15.231739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:25.859 [2024-10-15 04:43:15.231754] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:25.859 [2024-10-15 04:43:15.231770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.231782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:25.859 [2024-10-15 04:43:15.231796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:25.859 [2024-10-15 04:43:15.231807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:25.859 [2024-10-15 04:43:15.231830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:25.859 [2024-10-15 04:43:15.231842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:25.859 [2024-10-15 04:43:15.231855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:25.859 [2024-10-15 04:43:15.231866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:25.859 [2024-10-15 04:43:15.231879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:25.859 [2024-10-15 04:43:15.231890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:25.859 [2024-10-15 04:43:15.231906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.231917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.231931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.231942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.231955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:25.859 [2024-10-15 04:43:15.231966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:25.859 [2024-10-15 04:43:15.231982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.231995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:25.859 [2024-10-15 04:43:15.232009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:25.859 [2024-10-15 04:43:15.232020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:25.859 [2024-10-15 04:43:15.232033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:25.859 [2024-10-15 04:43:15.232045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.859 [2024-10-15 04:43:15.232065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:25.859 [2024-10-15 04:43:15.232076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:19:25.859 [2024-10-15 04:43:15.232088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.859 [2024-10-15 04:43:15.232178] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:25.859 [2024-10-15 04:43:15.232197] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:29.147 [2024-10-15 04:43:18.199226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.199297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:29.147 [2024-10-15 04:43:18.199318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2971.863 ms 00:19:29.147 [2024-10-15 04:43:18.199331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.236967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.237246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.147 [2024-10-15 04:43:18.237274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.306 ms 00:19:29.147 [2024-10-15 04:43:18.237289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.237492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.237511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:29.147 [2024-10-15 04:43:18.237523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:29.147 [2024-10-15 04:43:18.237539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.298861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.298929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.147 [2024-10-15 04:43:18.298945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.363 ms 00:19:29.147 [2024-10-15 04:43:18.298963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.299095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.299112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.147 [2024-10-15 04:43:18.299124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.147 [2024-10-15 04:43:18.299137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.299612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.299630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.147 [2024-10-15 04:43:18.299642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:19:29.147 [2024-10-15 04:43:18.299655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.299787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.299803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.147 [2024-10-15 04:43:18.299814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:29.147 [2024-10-15 04:43:18.299831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.321667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.321954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:29.147 [2024-10-15 04:43:18.321983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.795 ms 00:19:29.147 [2024-10-15 04:43:18.322014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.335374] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:29.147 [2024-10-15 04:43:18.352361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.352630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:29.147 [2024-10-15 04:43:18.352663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.226 ms 00:19:29.147 [2024-10-15 04:43:18.352677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.447993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.448060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:29.147 [2024-10-15 04:43:18.448080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.323 ms 00:19:29.147 [2024-10-15 04:43:18.448094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.448324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.448338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:29.147 [2024-10-15 04:43:18.448356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:19:29.147 [2024-10-15 04:43:18.448366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.486136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.486196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:29.147 [2024-10-15 04:43:18.486220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.785 ms 00:19:29.147 [2024-10-15 04:43:18.486232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.524543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.524608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:29.147 [2024-10-15 04:43:18.524628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.278 ms 00:19:29.147 [2024-10-15 04:43:18.524638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.525513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.525545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:29.147 [2024-10-15 04:43:18.525561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:19:29.147 [2024-10-15 04:43:18.525572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.147 [2024-10-15 04:43:18.630512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.147 [2024-10-15 04:43:18.630780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:29.147 [2024-10-15 04:43:18.630837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.058 ms 00:19:29.147 [2024-10-15 04:43:18.630850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
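The trace_step records above are the 'FTL startup' management pipeline bringing ftl0 up on an unclean volume: a layout dump and upgrade, a roughly 3-second NV cache scrub, then initialization of metadata, bands, NV cache, valid/trim maps, L2P, and the P2L checkpoint regions, each stage logged with a name, duration, and status. In SPDK this whole sequence is driven by a single bdev-create RPC against the running app; a minimal sketch, assuming scripts/rpc.py from the repo, with <base_bdev> as a placeholder (the base device is only identified by UUID in this log) and nvc0n1p0 as the cache per the layout messages:

  # Hedged sketch: creating the FTL bdev is what kicks off the 'FTL startup'
  # management process traced above. <base_bdev> is a placeholder, and the
  # flag spellings should be double-checked against: rpc.py bdev_ftl_create -h
  scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0

The init stages that complete here have matching 'Rollback' entries in the 'FTL shutdown' trace later in this log, which unwind the same steps in reverse order.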
00:19:29.407 [2024-10-15 04:43:18.670495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.407 [2024-10-15 04:43:18.670754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:29.407 [2024-10-15 04:43:18.670785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.563 ms 00:19:29.407 [2024-10-15 04:43:18.670797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.407 [2024-10-15 04:43:18.711755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.407 [2024-10-15 04:43:18.711830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:29.407 [2024-10-15 04:43:18.711851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.844 ms 00:19:29.407 [2024-10-15 04:43:18.711879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.407 [2024-10-15 04:43:18.753333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.407 [2024-10-15 04:43:18.753570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:29.407 [2024-10-15 04:43:18.753615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.387 ms 00:19:29.407 [2024-10-15 04:43:18.753644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.407 [2024-10-15 04:43:18.753791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.407 [2024-10-15 04:43:18.753806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:29.407 [2024-10-15 04:43:18.753825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:29.407 [2024-10-15 04:43:18.753875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.407 [2024-10-15 04:43:18.753971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.407 [2024-10-15 04:43:18.753984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:29.407 [2024-10-15 04:43:18.753998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:29.407 [2024-10-15 04:43:18.754010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.407 [2024-10-15 04:43:18.755090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.407 [2024-10-15 04:43:18.760120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3550.065 ms, result 0 00:19:29.407 [2024-10-15 04:43:18.761135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:29.407 { 00:19:29.407 "name": "ftl0", 00:19:29.407 "uuid": "88ef924c-9641-4650-a021-afc46cb8fbb7" 00:19:29.407 } 00:19:29.407 04:43:18 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:29.407 04:43:18 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:19:29.407 04:43:18 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:29.407 04:43:18 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:19:29.407 04:43:18 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:29.407 04:43:18 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:29.407 04:43:18 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:29.667 04:43:19 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:29.926 [ 00:19:29.926 { 00:19:29.926 "name": "ftl0", 00:19:29.926 "aliases": [ 00:19:29.926 "88ef924c-9641-4650-a021-afc46cb8fbb7" 00:19:29.926 ], 00:19:29.927 "product_name": "FTL disk", 00:19:29.927 "block_size": 4096, 00:19:29.927 "num_blocks": 23592960, 00:19:29.927 "uuid": "88ef924c-9641-4650-a021-afc46cb8fbb7", 00:19:29.927 "assigned_rate_limits": { 00:19:29.927 "rw_ios_per_sec": 0, 00:19:29.927 "rw_mbytes_per_sec": 0, 00:19:29.927 "r_mbytes_per_sec": 0, 00:19:29.927 "w_mbytes_per_sec": 0 00:19:29.927 }, 00:19:29.927 "claimed": false, 00:19:29.927 "zoned": false, 00:19:29.927 "supported_io_types": { 00:19:29.927 "read": true, 00:19:29.927 "write": true, 00:19:29.927 "unmap": true, 00:19:29.927 "flush": true, 00:19:29.927 "reset": false, 00:19:29.927 "nvme_admin": false, 00:19:29.927 "nvme_io": false, 00:19:29.927 "nvme_io_md": false, 00:19:29.927 "write_zeroes": true, 00:19:29.927 "zcopy": false, 00:19:29.927 "get_zone_info": false, 00:19:29.927 "zone_management": false, 00:19:29.927 "zone_append": false, 00:19:29.927 "compare": false, 00:19:29.927 "compare_and_write": false, 00:19:29.927 "abort": false, 00:19:29.927 "seek_hole": false, 00:19:29.927 "seek_data": false, 00:19:29.927 "copy": false, 00:19:29.927 "nvme_iov_md": false 00:19:29.927 }, 00:19:29.927 "driver_specific": { 00:19:29.927 "ftl": { 00:19:29.927 "base_bdev": "acc6871d-fef9-47a2-9165-c122826caef8", 00:19:29.927 "cache": "nvc0n1p0" 00:19:29.927 } 00:19:29.927 } 00:19:29.927 } 00:19:29.927 ] 00:19:29.927 04:43:19 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:19:29.927 04:43:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:29.927 04:43:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:29.927 04:43:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:29.927 04:43:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:30.186 04:43:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:30.186 { 00:19:30.186 "name": "ftl0", 00:19:30.186 "aliases": [ 00:19:30.186 "88ef924c-9641-4650-a021-afc46cb8fbb7" 00:19:30.186 ], 00:19:30.186 "product_name": "FTL disk", 00:19:30.186 "block_size": 4096, 00:19:30.186 "num_blocks": 23592960, 00:19:30.186 "uuid": "88ef924c-9641-4650-a021-afc46cb8fbb7", 00:19:30.186 "assigned_rate_limits": { 00:19:30.186 "rw_ios_per_sec": 0, 00:19:30.186 "rw_mbytes_per_sec": 0, 00:19:30.186 "r_mbytes_per_sec": 0, 00:19:30.186 "w_mbytes_per_sec": 0 00:19:30.186 }, 00:19:30.186 "claimed": false, 00:19:30.186 "zoned": false, 00:19:30.186 "supported_io_types": { 00:19:30.186 "read": true, 00:19:30.186 "write": true, 00:19:30.186 "unmap": true, 00:19:30.186 "flush": true, 00:19:30.186 "reset": false, 00:19:30.186 "nvme_admin": false, 00:19:30.186 "nvme_io": false, 00:19:30.186 "nvme_io_md": false, 00:19:30.186 "write_zeroes": true, 00:19:30.186 "zcopy": false, 00:19:30.186 "get_zone_info": false, 00:19:30.186 "zone_management": false, 00:19:30.186 "zone_append": false, 00:19:30.186 "compare": false, 00:19:30.186 "compare_and_write": false, 00:19:30.186 "abort": false, 00:19:30.186 "seek_hole": false, 00:19:30.186 "seek_data": false, 00:19:30.186 "copy": false, 00:19:30.186 "nvme_iov_md": false 00:19:30.186 }, 00:19:30.186 "driver_specific": { 00:19:30.186 "ftl": { 00:19:30.186 "base_bdev": "acc6871d-fef9-47a2-9165-c122826caef8", 
00:19:30.186 "cache": "nvc0n1p0" 00:19:30.186 } 00:19:30.186 } 00:19:30.186 } 00:19:30.186 ]' 00:19:30.186 04:43:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:30.186 04:43:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:30.186 04:43:19 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:30.444 [2024-10-15 04:43:19.866200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.866262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.444 [2024-10-15 04:43:19.866279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.444 [2024-10-15 04:43:19.866293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.866328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:30.444 [2024-10-15 04:43:19.870453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.870499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.444 [2024-10-15 04:43:19.870521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:19:30.444 [2024-10-15 04:43:19.870532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.871069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.871094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.444 [2024-10-15 04:43:19.871109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:19:30.444 [2024-10-15 04:43:19.871119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.873953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.873976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.444 [2024-10-15 04:43:19.873990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:19:30.444 [2024-10-15 04:43:19.874003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.879640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.879679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.444 [2024-10-15 04:43:19.879695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.605 ms 00:19:30.444 [2024-10-15 04:43:19.879705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.918074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.918138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.444 [2024-10-15 04:43:19.918162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.256 ms 00:19:30.444 [2024-10-15 04:43:19.918173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.942350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.942405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.444 [2024-10-15 04:43:19.942426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 24.099 ms 00:19:30.444 [2024-10-15 04:43:19.942437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.444 [2024-10-15 04:43:19.942692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.444 [2024-10-15 04:43:19.942711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.444 [2024-10-15 04:43:19.942726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:30.444 [2024-10-15 04:43:19.942738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.704 [2024-10-15 04:43:19.982459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.704 [2024-10-15 04:43:19.982529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:30.704 [2024-10-15 04:43:19.982561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.745 ms 00:19:30.704 [2024-10-15 04:43:19.982572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.704 [2024-10-15 04:43:20.020072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.704 [2024-10-15 04:43:20.020285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:30.704 [2024-10-15 04:43:20.020321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.430 ms 00:19:30.704 [2024-10-15 04:43:20.020334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.704 [2024-10-15 04:43:20.058959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.704 [2024-10-15 04:43:20.059013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:30.704 [2024-10-15 04:43:20.059032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.562 ms 00:19:30.704 [2024-10-15 04:43:20.059042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.704 [2024-10-15 04:43:20.097018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.704 [2024-10-15 04:43:20.097233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:30.704 [2024-10-15 04:43:20.097263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.867 ms 00:19:30.704 [2024-10-15 04:43:20.097274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.704 [2024-10-15 04:43:20.097416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:30.704 [2024-10-15 04:43:20.097437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097530] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 
[2024-10-15 04:43:20.097902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.097995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:30.704 [2024-10-15 04:43:20.098257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:30.704 [2024-10-15 04:43:20.098546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:30.705 [2024-10-15 04:43:20.098812] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:30.705 [2024-10-15 04:43:20.098836] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:19:30.705 [2024-10-15 04:43:20.098848] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:30.705 [2024-10-15 04:43:20.098861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:30.705 [2024-10-15 04:43:20.098870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:30.705 [2024-10-15 04:43:20.098883] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:30.705 [2024-10-15 04:43:20.098893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:30.705 [2024-10-15 04:43:20.098906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:30.705 [2024-10-15 04:43:20.098919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:30.705 [2024-10-15 04:43:20.098930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:30.705 [2024-10-15 04:43:20.098940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:30.705 [2024-10-15 04:43:20.098952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.705 [2024-10-15 04:43:20.098963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:30.705 [2024-10-15 04:43:20.098977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:19:30.705 [2024-10-15 04:43:20.098987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.705 [2024-10-15 04:43:20.119441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.705 [2024-10-15 04:43:20.119596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:30.705 [2024-10-15 04:43:20.119625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.448 ms 00:19:30.705 [2024-10-15 04:43:20.119636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.705 [2024-10-15 04:43:20.120281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.705 [2024-10-15 04:43:20.120299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:30.705 [2024-10-15 04:43:20.120313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:30.705 [2024-10-15 04:43:20.120323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.705 [2024-10-15 04:43:20.191519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.705 [2024-10-15 04:43:20.191582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.705 [2024-10-15 04:43:20.191600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.705 [2024-10-15 04:43:20.191615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.705 [2024-10-15 04:43:20.191776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.705 [2024-10-15 04:43:20.191790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.705 [2024-10-15 04:43:20.191803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.705 [2024-10-15 04:43:20.191840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.705 [2024-10-15 04:43:20.191919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.705 [2024-10-15 04:43:20.191932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.705 [2024-10-15 04:43:20.191949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.705 [2024-10-15 04:43:20.191960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.705 [2024-10-15 04:43:20.191997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.705 [2024-10-15 04:43:20.192009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.705 [2024-10-15 04:43:20.192021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.705 [2024-10-15 04:43:20.192031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.325139] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.325378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.964 [2024-10-15 04:43:20.325409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.325420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.428211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.428445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.964 [2024-10-15 04:43:20.428475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.428486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.428616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.428629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.964 [2024-10-15 04:43:20.428663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.428674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.428732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.428746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.964 [2024-10-15 04:43:20.428758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.428768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.428953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.428969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.964 [2024-10-15 04:43:20.428983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.428994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.429056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.429070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:30.964 [2024-10-15 04:43:20.429086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.429097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.429149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.429161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.964 [2024-10-15 04:43:20.429177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.429190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.964 [2024-10-15 04:43:20.429263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.964 [2024-10-15 04:43:20.429279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.964 [2024-10-15 04:43:20.429292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.964 [2024-10-15 04:43:20.429302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:30.964 [2024-10-15 04:43:20.429492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 564.191 ms, result 0 00:19:30.964 true 00:19:30.964 04:43:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75692 00:19:30.964 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75692 ']' 00:19:30.964 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75692 00:19:30.964 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:30.964 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.223 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75692 00:19:31.223 killing process with pid 75692 00:19:31.223 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.223 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.223 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75692' 00:19:31.223 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75692 00:19:31.223 04:43:20 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75692 00:19:36.504 04:43:25 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:37.072 65536+0 records in 00:19:37.072 65536+0 records out 00:19:37.072 268435456 bytes (268 MB, 256 MiB) copied, 1.05508 s, 254 MB/s 00:19:37.072 04:43:26 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.331 [2024-10-15 04:43:26.600624] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
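The tail of this block is the trim test's data phase: after the first FTL instance is torn down and its owning process killed, dd fabricates a 256 MiB random pattern (65536 blocks of 4 KiB; the reported 268435456 bytes in 1.05508 s works out to the quoted 254 MB/s), and spdk_dd then replays that file into the ftl0 bdev. Because spdk_dd is handed the saved JSON subsystem config instead of a live RPC target, it boots its own SPDK application, which is why the 'Starting SPDK' banner above is followed by the DPDK EAL parameters and a second 'FTL startup' trace below. A minimal sketch of the two commands, using the repo-relative paths from this log (the dd output path is an assumption; the shell trace only shows the source side of the redirect):

  # Hedged sketch of the pattern-write step, mirroring the commands traced
  # in this log. Run from the spdk repo root; adjust paths to your checkout.
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  ./build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
      --json=test/ftl/config/ftl.json

--ob names an output bdev rather than an output file, so the writes land in ftl0 through the freshly constructed FTL stack.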
00:19:37.331 [2024-10-15 04:43:26.601033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75895 ] 00:19:37.331 [2024-10-15 04:43:26.775397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.590 [2024-10-15 04:43:26.892679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.849 [2024-10-15 04:43:27.265632] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.849 [2024-10-15 04:43:27.265708] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.147 [2024-10-15 04:43:27.428788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.428874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.147 [2024-10-15 04:43:27.428901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.147 [2024-10-15 04:43:27.428917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.432532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.432581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.147 [2024-10-15 04:43:27.432598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.583 ms 00:19:38.147 [2024-10-15 04:43:27.432623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.432745] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.147 [2024-10-15 04:43:27.434097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.147 [2024-10-15 04:43:27.434277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.434358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.147 [2024-10-15 04:43:27.434378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:19:38.147 [2024-10-15 04:43:27.434391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.436048] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.147 [2024-10-15 04:43:27.455198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.455257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.147 [2024-10-15 04:43:27.455282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.180 ms 00:19:38.147 [2024-10-15 04:43:27.455295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.455428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.455445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.147 [2024-10-15 04:43:27.455459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:38.147 [2024-10-15 04:43:27.455472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.462391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:38.147 [2024-10-15 04:43:27.462440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.147 [2024-10-15 04:43:27.462453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:19:38.147 [2024-10-15 04:43:27.462464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.462584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.462600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.147 [2024-10-15 04:43:27.462610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:38.147 [2024-10-15 04:43:27.462620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.462651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.462662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.147 [2024-10-15 04:43:27.462676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.147 [2024-10-15 04:43:27.462686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.462711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.147 [2024-10-15 04:43:27.467613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.467647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.147 [2024-10-15 04:43:27.467660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:19:38.147 [2024-10-15 04:43:27.467670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.467742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.467755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.147 [2024-10-15 04:43:27.467766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.147 [2024-10-15 04:43:27.467776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.467799] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.147 [2024-10-15 04:43:27.467838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:38.147 [2024-10-15 04:43:27.467877] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.147 [2024-10-15 04:43:27.467895] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:38.147 [2024-10-15 04:43:27.467985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.147 [2024-10-15 04:43:27.467998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.147 [2024-10-15 04:43:27.468011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.147 [2024-10-15 04:43:27.468024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.147 [2024-10-15 04:43:27.468036] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.147 [2024-10-15 04:43:27.468047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.147 [2024-10-15 04:43:27.468060] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.147 [2024-10-15 04:43:27.468070] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.147 [2024-10-15 04:43:27.468080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.147 [2024-10-15 04:43:27.468091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.147 [2024-10-15 04:43:27.468101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.147 [2024-10-15 04:43:27.468112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:38.147 [2024-10-15 04:43:27.468121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.147 [2024-10-15 04:43:27.468199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.148 [2024-10-15 04:43:27.468210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.148 [2024-10-15 04:43:27.468220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:38.148 [2024-10-15 04:43:27.468234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.148 [2024-10-15 04:43:27.468322] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.148 [2024-10-15 04:43:27.468335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.148 [2024-10-15 04:43:27.468346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.148 [2024-10-15 04:43:27.468376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.148 [2024-10-15 04:43:27.468406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.148 [2024-10-15 04:43:27.468425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.148 [2024-10-15 04:43:27.468435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.148 [2024-10-15 04:43:27.468445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.148 [2024-10-15 04:43:27.468465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.148 [2024-10-15 04:43:27.468474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:38.148 [2024-10-15 04:43:27.468484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.148 [2024-10-15 04:43:27.468502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468511] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.148 [2024-10-15 04:43:27.468530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.148 [2024-10-15 04:43:27.468557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.148 [2024-10-15 04:43:27.468585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.148 [2024-10-15 04:43:27.468611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.148 [2024-10-15 04:43:27.468638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.148 [2024-10-15 04:43:27.468655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.148 [2024-10-15 04:43:27.468664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:38.148 [2024-10-15 04:43:27.468672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.148 [2024-10-15 04:43:27.468681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.148 [2024-10-15 04:43:27.468690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:38.148 [2024-10-15 04:43:27.468699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.148 [2024-10-15 04:43:27.468717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:38.148 [2024-10-15 04:43:27.468727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468736] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.148 [2024-10-15 04:43:27.468747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.148 [2024-10-15 04:43:27.468756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.148 [2024-10-15 04:43:27.468776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.148 [2024-10-15 04:43:27.468785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.148 [2024-10-15 04:43:27.468795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.148 
[2024-10-15 04:43:27.468804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.148 [2024-10-15 04:43:27.468825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.148 [2024-10-15 04:43:27.468835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.148 [2024-10-15 04:43:27.468846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.148 [2024-10-15 04:43:27.468861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.468873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.148 [2024-10-15 04:43:27.468884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:38.148 [2024-10-15 04:43:27.468894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.148 [2024-10-15 04:43:27.468905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:38.148 [2024-10-15 04:43:27.468915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:38.148 [2024-10-15 04:43:27.468926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:38.148 [2024-10-15 04:43:27.468936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:38.148 [2024-10-15 04:43:27.468946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:38.148 [2024-10-15 04:43:27.468956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:38.148 [2024-10-15 04:43:27.468966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.468976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.468986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.468996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.469006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:38.148 [2024-10-15 04:43:27.469016] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.148 [2024-10-15 04:43:27.469028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.469040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.148 [2024-10-15 04:43:27.469050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.148 [2024-10-15 04:43:27.469059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.148 [2024-10-15 04:43:27.469069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.148 [2024-10-15 04:43:27.469080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.148 [2024-10-15 04:43:27.469091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.148 [2024-10-15 04:43:27.469101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:19:38.148 [2024-10-15 04:43:27.469114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.148 [2024-10-15 04:43:27.510758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.148 [2024-10-15 04:43:27.511060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.148 [2024-10-15 04:43:27.511179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.650 ms 00:19:38.149 [2024-10-15 04:43:27.511232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.511499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.511553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.149 [2024-10-15 04:43:27.511680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:38.149 [2024-10-15 04:43:27.511829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.573114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.573369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.149 [2024-10-15 04:43:27.573549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.275 ms 00:19:38.149 [2024-10-15 04:43:27.573590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.573763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.573811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.149 [2024-10-15 04:43:27.573863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.149 [2024-10-15 04:43:27.573949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.574437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.574547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.149 [2024-10-15 04:43:27.574615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:19:38.149 [2024-10-15 04:43:27.574650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.574854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.574964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.149 [2024-10-15 04:43:27.575050] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:38.149 [2024-10-15 04:43:27.575086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.595760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.595925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.149 [2024-10-15 04:43:27.596008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.651 ms 00:19:38.149 [2024-10-15 04:43:27.596046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.149 [2024-10-15 04:43:27.615598] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:38.149 [2024-10-15 04:43:27.615835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.149 [2024-10-15 04:43:27.616014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.149 [2024-10-15 04:43:27.616109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.149 [2024-10-15 04:43:27.616130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.839 ms 00:19:38.149 [2024-10-15 04:43:27.616141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.647553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.647747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.427 [2024-10-15 04:43:27.647792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.364 ms 00:19:38.427 [2024-10-15 04:43:27.647803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.667462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.667512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.427 [2024-10-15 04:43:27.667527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.512 ms 00:19:38.427 [2024-10-15 04:43:27.667537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.686087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.686131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.427 [2024-10-15 04:43:27.686146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:19:38.427 [2024-10-15 04:43:27.686174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.687074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.687113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.427 [2024-10-15 04:43:27.687131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:19:38.427 [2024-10-15 04:43:27.687141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.775010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.775099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.427 [2024-10-15 04:43:27.775123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.976 ms 00:19:38.427 [2024-10-15 04:43:27.775140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.788350] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:38.427 [2024-10-15 04:43:27.805382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.805443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.427 [2024-10-15 04:43:27.805460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.119 ms 00:19:38.427 [2024-10-15 04:43:27.805472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.805615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.805630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.427 [2024-10-15 04:43:27.805646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:38.427 [2024-10-15 04:43:27.805657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.805713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.805724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.427 [2024-10-15 04:43:27.805735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:38.427 [2024-10-15 04:43:27.805745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.805776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.805791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.427 [2024-10-15 04:43:27.805807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:38.427 [2024-10-15 04:43:27.805845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.805883] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.427 [2024-10-15 04:43:27.805910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.805921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.427 [2024-10-15 04:43:27.805932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:38.427 [2024-10-15 04:43:27.805942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.845158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.845231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.427 [2024-10-15 04:43:27.845257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.256 ms 00:19:38.427 [2024-10-15 04:43:27.845268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.427 [2024-10-15 04:43:27.845416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.427 [2024-10-15 04:43:27.845431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.427 [2024-10-15 04:43:27.845442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:38.427 [2024-10-15 04:43:27.845452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
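Worth noting: the layout dump earlier in this startup trace is internally consistent. The FTL block size is never printed, but it can be inferred as 4096 bytes from the base-device figures, e.g. region type 0x9 spanning 0x1900000 blocks matches the 102400.00 MiB data_btm region. A minimal shell cross-check under that inferred block size (illustrative only, not part of the test run):

    # Cross-check of the FTL layout dump above.
    # Assumption: FTL block size = 4096 bytes (inferred from the dump, not printed by it).
    blk=4096
    echo "l2p:      $(( 23592960 * 4 / 1024 / 1024 )) MiB"     # L2P entries x address size -> 90 MiB
    echo "data_btm: $(( 0x1900000 * blk / 1024 / 1024 )) MiB"  # region type 0x9 -> 102400 MiB
    echo "band_md:  $(( 0x80 * blk / 1024 )) KiB"              # region type 0x3 -> 512 KiB = 0.50 MiB

The same arithmetic reproduces the NV cache offsets: consecutive regions are contiguous, e.g. l2p at 0.12 MiB plus 90.00 MiB gives the band_md offset of 90.12 MiB reported above.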
00:19:38.427 [2024-10-15 04:43:27.846467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.427 [2024-10-15 04:43:27.851358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 418.054 ms, result 0 00:19:38.427 [2024-10-15 04:43:27.852266] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.427 [2024-10-15 04:43:27.871218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:39.805  [2024-10-15T04:43:29.876Z] Copying: 28/256 [MB] (28 MBps) [2024-10-15T04:43:31.253Z] Copying: 57/256 [MB] (28 MBps) [2024-10-15T04:43:32.189Z] Copying: 87/256 [MB] (30 MBps) [2024-10-15T04:43:33.125Z] Copying: 115/256 [MB] (28 MBps) [2024-10-15T04:43:34.061Z] Copying: 147/256 [MB] (31 MBps) [2024-10-15T04:43:34.998Z] Copying: 177/256 [MB] (29 MBps) [2024-10-15T04:43:35.948Z] Copying: 207/256 [MB] (30 MBps) [2024-10-15T04:43:36.885Z] Copying: 237/256 [MB] (29 MBps) [2024-10-15T04:43:36.885Z] Copying: 256/256 [MB] (average 29 MBps)[2024-10-15 04:43:36.555117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.381 [2024-10-15 04:43:36.570077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.381 [2024-10-15 04:43:36.570123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.381 [2024-10-15 04:43:36.570139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:47.381 [2024-10-15 04:43:36.570150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.381 [2024-10-15 04:43:36.570175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.381 [2024-10-15 04:43:36.574525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.381 [2024-10-15 04:43:36.574557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.381 [2024-10-15 04:43:36.574577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:19:47.381 [2024-10-15 04:43:36.574588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.381 [2024-10-15 04:43:36.576543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.381 [2024-10-15 04:43:36.576585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.382 [2024-10-15 04:43:36.576598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:19:47.382 [2024-10-15 04:43:36.576608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.583169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.583207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.382 [2024-10-15 04:43:36.583219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.551 ms 00:19:47.382 [2024-10-15 04:43:36.583236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.588889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.589043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.382 [2024-10-15 04:43:36.589063] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.608 ms 00:19:47.382 [2024-10-15 04:43:36.589074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.625006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.625051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.382 [2024-10-15 04:43:36.625066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.935 ms 00:19:47.382 [2024-10-15 04:43:36.625077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.646173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.646217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.382 [2024-10-15 04:43:36.646232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.073 ms 00:19:47.382 [2024-10-15 04:43:36.646242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.646385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.646399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.382 [2024-10-15 04:43:36.646409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:47.382 [2024-10-15 04:43:36.646419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.684833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.684898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.382 [2024-10-15 04:43:36.684914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.454 ms 00:19:47.382 [2024-10-15 04:43:36.684924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.722025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.722074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.382 [2024-10-15 04:43:36.722089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.072 ms 00:19:47.382 [2024-10-15 04:43:36.722099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.758486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.758533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.382 [2024-10-15 04:43:36.758548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.380 ms 00:19:47.382 [2024-10-15 04:43:36.758557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.795081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.382 [2024-10-15 04:43:36.795129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.382 [2024-10-15 04:43:36.795144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.492 ms 00:19:47.382 [2024-10-15 04:43:36.795154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.382 [2024-10-15 04:43:36.795215] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.382 [2024-10-15 04:43:36.795233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795497] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 
04:43:36.795759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.382 [2024-10-15 04:43:36.795880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.795992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:19:47.383 [2024-10-15 04:43:36.796054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:19:47.383 [2024-10-15 04:43:36.796337] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.383 [2024-10-15 04:43:36.796355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:19:47.383 [2024-10-15 04:43:36.796366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.383 [2024-10-15 04:43:36.796375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.383 [2024-10-15 04:43:36.796385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.383 [2024-10-15 04:43:36.796395] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.383 [2024-10-15 04:43:36.796405] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.383 [2024-10-15 04:43:36.796414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.383 [2024-10-15 04:43:36.796424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.383 [2024-10-15 04:43:36.796433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.383 [2024-10-15 04:43:36.796442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.383 [2024-10-15 04:43:36.796452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.383 [2024-10-15 04:43:36.796462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.383 [2024-10-15 04:43:36.796473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:19:47.383 [2024-10-15 04:43:36.796482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.383 [2024-10-15 04:43:36.816789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.383 [2024-10-15 04:43:36.816860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.383 [2024-10-15 04:43:36.816875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.312 ms 00:19:47.383 [2024-10-15 04:43:36.816886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.383 [2024-10-15 04:43:36.817495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.383 [2024-10-15 04:43:36.817520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.383 [2024-10-15 04:43:36.817539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:19:47.383 [2024-10-15 04:43:36.817549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.383 [2024-10-15 04:43:36.873113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.383 [2024-10-15 04:43:36.873175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.383 [2024-10-15 04:43:36.873191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.383 [2024-10-15 04:43:36.873201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.383 [2024-10-15 04:43:36.873352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.383 [2024-10-15 04:43:36.873367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.383 [2024-10-15 04:43:36.873388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.383 [2024-10-15 04:43:36.873399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
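The statistics block above also explains the "WAF: inf" line: write amplification is total media writes over user writes, which is consistent with the numbers printed here, where 960 total writes against 0 user writes leaves the ratio undefined and ftl_debug.c reports inf. The same arithmetic in shell, with the values taken from the log:

    # WAF = total (media) writes / user writes; 0 user writes -> "inf".
    total_writes=960   # "total writes: 960" above
    user_writes=0      # "user writes: 0" above
    if (( user_writes == 0 )); then
        echo "WAF: inf"
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi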
00:19:47.383 [2024-10-15 04:43:36.873453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.383 [2024-10-15 04:43:36.873466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.383 [2024-10-15 04:43:36.873477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.383 [2024-10-15 04:43:36.873487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.383 [2024-10-15 04:43:36.873506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.383 [2024-10-15 04:43:36.873517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.383 [2024-10-15 04:43:36.873527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.383 [2024-10-15 04:43:36.873541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.642 [2024-10-15 04:43:36.996534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.642 [2024-10-15 04:43:36.996591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.642 [2024-10-15 04:43:36.996608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.642 [2024-10-15 04:43:36.996620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.642 [2024-10-15 04:43:37.097159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.642 [2024-10-15 04:43:37.097223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.642 [2024-10-15 04:43:37.097242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.642 [2024-10-15 04:43:37.097262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.642 [2024-10-15 04:43:37.097359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.643 [2024-10-15 04:43:37.097375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.643 [2024-10-15 04:43:37.097388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.643 [2024-10-15 04:43:37.097401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.643 [2024-10-15 04:43:37.097434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.643 [2024-10-15 04:43:37.097447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.643 [2024-10-15 04:43:37.097461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.643 [2024-10-15 04:43:37.097473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.643 [2024-10-15 04:43:37.097592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.643 [2024-10-15 04:43:37.097608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.643 [2024-10-15 04:43:37.097621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.643 [2024-10-15 04:43:37.097633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.643 [2024-10-15 04:43:37.097674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.643 [2024-10-15 04:43:37.097689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.643 [2024-10-15 04:43:37.097706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.643 [2024-10-15 
04:43:37.097723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.643 [2024-10-15 04:43:37.097767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.643 [2024-10-15 04:43:37.097778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.643 [2024-10-15 04:43:37.097788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.643 [2024-10-15 04:43:37.097798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.643 [2024-10-15 04:43:37.097867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.643 [2024-10-15 04:43:37.097880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.643 [2024-10-15 04:43:37.097891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.643 [2024-10-15 04:43:37.097901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.643 [2024-10-15 04:43:37.098052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.835 ms, result 0 00:19:49.022 00:19:49.022 00:19:49.022 04:43:38 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76017 00:19:49.022 04:43:38 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76017 00:19:49.022 04:43:38 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:49.022 04:43:38 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76017 ']' 00:19:49.022 04:43:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.022 04:43:38 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.022 04:43:38 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.022 04:43:38 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.022 04:43:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:49.022 [2024-10-15 04:43:38.478536] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
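The waitforlisten call traced above is what prints the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line: it polls the freshly launched spdk_tgt (pid 76017) until its RPC socket answers. A simplified sketch of that pattern, reusing the paths and retry limit visible in the log (the real helper in common/autotest_common.sh does more, e.g. TCP addresses and xtrace handling, so treat this as illustrative):

    # Poll the SPDK RPC socket until spdk_tgt answers, or give up.
    svcpid=76017                                  # from "svcpid=76017" above
    rpc_addr=/var/tmp/spdk.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do             # max_retries=100, as above
        kill -0 "$svcpid" 2> /dev/null || { echo "spdk_tgt exited early"; break; }
        if "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break                                 # target is up and serving RPCs
        fi
        sleep 0.5
    done

Here rpc_get_methods is a standard SPDK RPC and -s/-t select the socket path and per-call timeout; the 0.5 s sleep interval is an arbitrary choice for the sketch.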
00:19:49.022 [2024-10-15 04:43:38.479108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76017 ] 00:19:49.279 [2024-10-15 04:43:38.651112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.279 [2024-10-15 04:43:38.771464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.212 04:43:39 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.212 04:43:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:50.212 04:43:39 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:50.478 [2024-10-15 04:43:39.880181] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.478 [2024-10-15 04:43:39.880263] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.738 [2024-10-15 04:43:40.064206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.064470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.738 [2024-10-15 04:43:40.064511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:50.738 [2024-10-15 04:43:40.064524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.068724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.068772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.738 [2024-10-15 04:43:40.068791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:19:50.738 [2024-10-15 04:43:40.068805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.068950] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.738 [2024-10-15 04:43:40.070009] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.738 [2024-10-15 04:43:40.070053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.070066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.738 [2024-10-15 04:43:40.070079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:19:50.738 [2024-10-15 04:43:40.070089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.071725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.738 [2024-10-15 04:43:40.092200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.092269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.738 [2024-10-15 04:43:40.092284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.521 ms 00:19:50.738 [2024-10-15 04:43:40.092298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.092417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.092439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.738 [2024-10-15 04:43:40.092451] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:50.738 [2024-10-15 04:43:40.092464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.099441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.099625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.738 [2024-10-15 04:43:40.099651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.936 ms 00:19:50.738 [2024-10-15 04:43:40.099664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.099812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.099876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.738 [2024-10-15 04:43:40.099889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:50.738 [2024-10-15 04:43:40.099902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.099935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.099954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.738 [2024-10-15 04:43:40.099964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:50.738 [2024-10-15 04:43:40.099977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.100004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:50.738 [2024-10-15 04:43:40.105002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.105039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.738 [2024-10-15 04:43:40.105057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.008 ms 00:19:50.738 [2024-10-15 04:43:40.105070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.105148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.105163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.738 [2024-10-15 04:43:40.105179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:50.738 [2024-10-15 04:43:40.105193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.105232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.738 [2024-10-15 04:43:40.105258] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:50.738 [2024-10-15 04:43:40.105309] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.738 [2024-10-15 04:43:40.105330] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:50.738 [2024-10-15 04:43:40.105427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:50.738 [2024-10-15 04:43:40.105441] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.738 [2024-10-15 04:43:40.105456] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:50.738 [2024-10-15 04:43:40.105470] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.738 [2024-10-15 04:43:40.105488] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.738 [2024-10-15 04:43:40.105499] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:50.738 [2024-10-15 04:43:40.105511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:50.738 [2024-10-15 04:43:40.105521] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:50.738 [2024-10-15 04:43:40.105536] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:50.738 [2024-10-15 04:43:40.105547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.105560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.738 [2024-10-15 04:43:40.105570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:50.738 [2024-10-15 04:43:40.105582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.105658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.738 [2024-10-15 04:43:40.105671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.738 [2024-10-15 04:43:40.105684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:50.738 [2024-10-15 04:43:40.105697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.738 [2024-10-15 04:43:40.105785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.738 [2024-10-15 04:43:40.105800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.738 [2024-10-15 04:43:40.105810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.738 [2024-10-15 04:43:40.105844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.738 [2024-10-15 04:43:40.105855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.738 [2024-10-15 04:43:40.105866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.738 [2024-10-15 04:43:40.105876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:50.738 [2024-10-15 04:43:40.105893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.738 [2024-10-15 04:43:40.105903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:50.738 [2024-10-15 04:43:40.105915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.738 [2024-10-15 04:43:40.105925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.738 [2024-10-15 04:43:40.105955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:50.738 [2024-10-15 04:43:40.105965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.738 [2024-10-15 04:43:40.105977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.738 [2024-10-15 04:43:40.105986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:50.738 [2024-10-15 04:43:40.105997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.738 
[2024-10-15 04:43:40.106007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.738 [2024-10-15 04:43:40.106018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:50.738 [2024-10-15 04:43:40.106027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.738 [2024-10-15 04:43:40.106039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.738 [2024-10-15 04:43:40.106058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:50.738 [2024-10-15 04:43:40.106070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.738 [2024-10-15 04:43:40.106079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.738 [2024-10-15 04:43:40.106093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.739 [2024-10-15 04:43:40.106114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.739 [2024-10-15 04:43:40.106123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.739 [2024-10-15 04:43:40.106144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.739 [2024-10-15 04:43:40.106155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.739 [2024-10-15 04:43:40.106177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.739 [2024-10-15 04:43:40.106186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.739 [2024-10-15 04:43:40.106207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:50.739 [2024-10-15 04:43:40.106218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:50.739 [2024-10-15 04:43:40.106227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.739 [2024-10-15 04:43:40.106238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:50.739 [2024-10-15 04:43:40.106247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:50.739 [2024-10-15 04:43:40.106261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:50.739 [2024-10-15 04:43:40.106281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:50.739 [2024-10-15 04:43:40.106290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106302] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.739 [2024-10-15 04:43:40.106312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.739 [2024-10-15 04:43:40.106324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.739 [2024-10-15 04:43:40.106336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.739 [2024-10-15 04:43:40.106349] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:50.739 [2024-10-15 04:43:40.106358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.739 [2024-10-15 04:43:40.106369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.739 [2024-10-15 04:43:40.106378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.739 [2024-10-15 04:43:40.106389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.739 [2024-10-15 04:43:40.106399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:50.739 [2024-10-15 04:43:40.106412] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.739 [2024-10-15 04:43:40.106432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:50.739 [2024-10-15 04:43:40.106468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:50.739 [2024-10-15 04:43:40.106481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:50.739 [2024-10-15 04:43:40.106491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:50.739 [2024-10-15 04:43:40.106504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:50.739 [2024-10-15 04:43:40.106518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:50.739 [2024-10-15 04:43:40.106540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:50.739 [2024-10-15 04:43:40.106560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:50.739 [2024-10-15 04:43:40.106582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:50.739 [2024-10-15 04:43:40.106601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:50.739 [2024-10-15 04:43:40.106674] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.739 [2024-10-15 
04:43:40.106686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.739 [2024-10-15 04:43:40.106720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.739 [2024-10-15 04:43:40.106740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.739 [2024-10-15 04:43:40.106760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.739 [2024-10-15 04:43:40.106781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.106799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.739 [2024-10-15 04:43:40.106838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:19:50.739 [2024-10-15 04:43:40.106857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.148080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.148317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.739 [2024-10-15 04:43:40.148351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.189 ms 00:19:50.739 [2024-10-15 04:43:40.148363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.148562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.148584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.739 [2024-10-15 04:43:40.148598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:50.739 [2024-10-15 04:43:40.148609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.200781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.200853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.739 [2024-10-15 04:43:40.200877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.219 ms 00:19:50.739 [2024-10-15 04:43:40.200895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.201020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.201034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.739 [2024-10-15 04:43:40.201052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.739 [2024-10-15 04:43:40.201063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.201635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.201675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.739 [2024-10-15 04:43:40.201693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:19:50.739 [2024-10-15 04:43:40.201704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.201878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.201893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.739 [2024-10-15 04:43:40.201907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:50.739 [2024-10-15 04:43:40.201917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.739 [2024-10-15 04:43:40.224235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.739 [2024-10-15 04:43:40.224277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.739 [2024-10-15 04:43:40.224295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.325 ms 00:19:50.739 [2024-10-15 04:43:40.224305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.999 [2024-10-15 04:43:40.244284] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:50.999 [2024-10-15 04:43:40.244330] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.999 [2024-10-15 04:43:40.244360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.999 [2024-10-15 04:43:40.244371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.999 [2024-10-15 04:43:40.244392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.956 ms 00:19:50.999 [2024-10-15 04:43:40.244402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.999 [2024-10-15 04:43:40.275961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.999 [2024-10-15 04:43:40.276051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.999 [2024-10-15 04:43:40.276074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.479 ms 00:19:50.999 [2024-10-15 04:43:40.276085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.999 [2024-10-15 04:43:40.295471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.999 [2024-10-15 04:43:40.295653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.999 [2024-10-15 04:43:40.295708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.250 ms 00:19:51.000 [2024-10-15 04:43:40.295729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.314198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.314243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:51.000 [2024-10-15 04:43:40.314263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.320 ms 00:19:51.000 [2024-10-15 04:43:40.314274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.315094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.315134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.000 [2024-10-15 04:43:40.315160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:19:51.000 [2024-10-15 04:43:40.315174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 
04:43:40.411851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.411915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:51.000 [2024-10-15 04:43:40.411935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.793 ms 00:19:51.000 [2024-10-15 04:43:40.411947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.423679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:51.000 [2024-10-15 04:43:40.440319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.440374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.000 [2024-10-15 04:43:40.440391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.302 ms 00:19:51.000 [2024-10-15 04:43:40.440404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.440534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.440552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:51.000 [2024-10-15 04:43:40.440563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.000 [2024-10-15 04:43:40.440576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.440630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.440645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.000 [2024-10-15 04:43:40.440655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:51.000 [2024-10-15 04:43:40.440668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.440693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.440709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.000 [2024-10-15 04:43:40.440720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:51.000 [2024-10-15 04:43:40.440734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.440771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:51.000 [2024-10-15 04:43:40.440792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.440803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:51.000 [2024-10-15 04:43:40.440837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:51.000 [2024-10-15 04:43:40.440852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.479704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.479772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.000 [2024-10-15 04:43:40.479792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.879 ms 00:19:51.000 [2024-10-15 04:43:40.479803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.000 [2024-10-15 04:43:40.479992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.000 [2024-10-15 04:43:40.480008] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:51.000 [2024-10-15 04:43:40.480022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:19:51.000 [2024-10-15 04:43:40.480032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.000 [2024-10-15 04:43:40.481122] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:51.000 [2024-10-15 04:43:40.486359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.225 ms, result 0
00:19:51.000 [2024-10-15 04:43:40.487794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:51.259 Some configs were skipped because the RPC state that can call them passed over.
00:19:51.259 04:43:40 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
[2024-10-15 04:43:40.755923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.259 [2024-10-15 04:43:40.756199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:51.259 [2024-10-15 04:43:40.756301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms
00:19:51.259 [2024-10-15 04:43:40.756343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.259 [2024-10-15 04:43:40.756417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.009 ms, result 0
00:19:51.259 true
00:19:51.519 04:43:40 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
[2024-10-15 04:43:40.971292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.519 [2024-10-15 04:43:40.971356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:51.519 [2024-10-15 04:43:40.971376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms
00:19:51.519 [2024-10-15 04:43:40.971388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.519 [2024-10-15 04:43:40.971433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.166 ms, result 0
00:19:51.519 true
00:19:51.519 04:43:40 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76017
00:19:51.519 04:43:40 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76017 ']'
00:19:51.519 04:43:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76017
00:19:51.519 04:43:40 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:19:51.519 04:43:40 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:51.519 04:43:40 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76017
00:19:51.778 04:43:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:51.778 04:43:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:51.778 04:43:41 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76017'
00:19:51.778 killing process with pid 76017
00:19:51.778 04:43:41 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76017
00:19:51.778 04:43:41 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76017
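
The two trim calls above are the heart of this test step. As a minimal sketch (the flags, bdev name, and block numbers are exactly those recorded by trim.sh above; the only assumptions are that an SPDK app with ftl0 configured is already running and that the commands are issued from the root of an spdk checkout rather than via the CI VM's absolute /home/vagrant path), the same unmaps could be reproduced by hand:

    # trim the first 1024 blocks of ftl0
    ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    # trim the last 1024 blocks: 23591936 + 1024 = 23592960, the L2P entry
    # count reported during startup, so this unmap ends exactly at the top
    # of the device's logical address space
    ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call returns true on success, which is what the two bare "true" lines in the trace above are.

00:19:52.716 [2024-10-15 04:43:42.175334]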
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.175635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.716 [2024-10-15 04:43:42.175666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.716 [2024-10-15 04:43:42.175681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.175720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:52.716 [2024-10-15 04:43:42.180342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.180383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.716 [2024-10-15 04:43:42.180407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 00:19:52.716 [2024-10-15 04:43:42.180418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.180714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.180728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.716 [2024-10-15 04:43:42.180741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:19:52.716 [2024-10-15 04:43:42.180751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.184227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.184265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:52.716 [2024-10-15 04:43:42.184281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:19:52.716 [2024-10-15 04:43:42.184295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.190045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.190083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.716 [2024-10-15 04:43:42.190098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.712 ms 00:19:52.716 [2024-10-15 04:43:42.190108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.206068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.206112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.716 [2024-10-15 04:43:42.206134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.922 ms 00:19:52.716 [2024-10-15 04:43:42.206177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.217589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.217635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.716 [2024-10-15 04:43:42.217653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.330 ms 00:19:52.716 [2024-10-15 04:43:42.217667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.716 [2024-10-15 04:43:42.217867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.716 [2024-10-15 04:43:42.217883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.716 [2024-10-15 04:43:42.217898] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:52.716 [2024-10-15 04:43:42.217909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.976 [2024-10-15 04:43:42.233598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.976 [2024-10-15 04:43:42.233660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:52.976 [2024-10-15 04:43:42.233678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.685 ms 00:19:52.976 [2024-10-15 04:43:42.233689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.976 [2024-10-15 04:43:42.249138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.976 [2024-10-15 04:43:42.249181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:52.976 [2024-10-15 04:43:42.249205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.410 ms 00:19:52.976 [2024-10-15 04:43:42.249226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.976 [2024-10-15 04:43:42.263923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.976 [2024-10-15 04:43:42.263967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:52.976 [2024-10-15 04:43:42.263986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.661 ms 00:19:52.976 [2024-10-15 04:43:42.263997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.976 [2024-10-15 04:43:42.279352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.976 [2024-10-15 04:43:42.279530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:52.976 [2024-10-15 04:43:42.279559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.287 ms 00:19:52.976 [2024-10-15 04:43:42.279570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.976 [2024-10-15 04:43:42.279629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:52.976 [2024-10-15 04:43:42.279647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 
04:43:42.279776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:52.976 [2024-10-15 04:43:42.279813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.279989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:52.977 [2024-10-15 04:43:42.280159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.280984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.281007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.281018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.281033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.281044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.281060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:52.977 [2024-10-15 04:43:42.281078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:52.978 [2024-10-15 04:43:42.281098] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:19:52.978 [2024-10-15 04:43:42.281124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:52.978 [2024-10-15 04:43:42.281146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:52.978 [2024-10-15 04:43:42.281162] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:52.978 [2024-10-15 04:43:42.281178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:52.978 [2024-10-15 04:43:42.281188] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:52.978 [2024-10-15 04:43:42.281203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:52.978 [2024-10-15 04:43:42.281232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:52.978 [2024-10-15 04:43:42.281247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:52.978 [2024-10-15 04:43:42.281256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:52.978 [2024-10-15 04:43:42.281271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:52.978 [2024-10-15 04:43:42.281281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:52.978 [2024-10-15 04:43:42.281298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:19:52.978 [2024-10-15 04:43:42.281308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.978 [2024-10-15 04:43:42.302013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.978 [2024-10-15 04:43:42.302217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.978 [2024-10-15 04:43:42.302404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.699 ms 00:19:52.978 [2024-10-15 04:43:42.302448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.978 [2024-10-15 04:43:42.303100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.978 [2024-10-15 04:43:42.303221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.978 [2024-10-15 04:43:42.303311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:52.978 [2024-10-15 04:43:42.303350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.978 [2024-10-15 04:43:42.376016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.978 [2024-10-15 04:43:42.376262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.978 [2024-10-15 04:43:42.376355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.978 [2024-10-15 04:43:42.376393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.978 [2024-10-15 04:43:42.376562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.978 [2024-10-15 04:43:42.376599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.978 [2024-10-15 04:43:42.376633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.978 [2024-10-15 04:43:42.376723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.978 [2024-10-15 04:43:42.376839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.978 [2024-10-15 04:43:42.376894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.978 [2024-10-15 04:43:42.377037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.978 [2024-10-15 04:43:42.377128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.978 [2024-10-15 04:43:42.377186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.978 [2024-10-15 04:43:42.377233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.978 [2024-10-15 04:43:42.377287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.978 [2024-10-15 04:43:42.377371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.504813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.505161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.237 [2024-10-15 04:43:42.505401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.505454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 
04:43:42.609695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.610020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.237 [2024-10-15 04:43:42.610201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.610244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.610396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.610467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.237 [2024-10-15 04:43:42.610587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.610619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.610678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.610712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.237 [2024-10-15 04:43:42.610746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.610776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.610946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.611012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.237 [2024-10-15 04:43:42.611115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.611146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.611211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.611246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.237 [2024-10-15 04:43:42.611279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.611308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.611370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.611403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.237 [2024-10-15 04:43:42.611542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.611639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.611707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-10-15 04:43:42.611740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.237 [2024-10-15 04:43:42.611781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-10-15 04:43:42.611825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-10-15 04:43:42.612003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.345 ms, result 0 00:19:54.616 04:43:43 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:54.616 04:43:43 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.616 [2024-10-15 04:43:43.763061] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:19:54.616 [2024-10-15 04:43:43.763185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76081 ] 00:19:54.616 [2024-10-15 04:43:43.936978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.616 [2024-10-15 04:43:44.056229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.199 [2024-10-15 04:43:44.431058] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.199 [2024-10-15 04:43:44.431320] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.199 [2024-10-15 04:43:44.593245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.593646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.199 [2024-10-15 04:43:44.593682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.199 [2024-10-15 04:43:44.593694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.596855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.596894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.199 [2024-10-15 04:43:44.596907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:19:55.199 [2024-10-15 04:43:44.596917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.597031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.199 [2024-10-15 04:43:44.598013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.199 [2024-10-15 04:43:44.598041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.598052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.199 [2024-10-15 04:43:44.598063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:19:55.199 [2024-10-15 04:43:44.598083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.599658] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.199 [2024-10-15 04:43:44.619240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.619458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.199 [2024-10-15 04:43:44.619489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.613 ms 00:19:55.199 [2024-10-15 04:43:44.619500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.619614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.619629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.199 [2024-10-15 04:43:44.619641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:19:55.199 [2024-10-15 04:43:44.619651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.626929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.626967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.199 [2024-10-15 04:43:44.626981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:19:55.199 [2024-10-15 04:43:44.626991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.627098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.627114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.199 [2024-10-15 04:43:44.627125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:55.199 [2024-10-15 04:43:44.627136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.627168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.627179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.199 [2024-10-15 04:43:44.627194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:55.199 [2024-10-15 04:43:44.627204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.627230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.199 [2024-10-15 04:43:44.632071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.632108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.199 [2024-10-15 04:43:44.632121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.856 ms 00:19:55.199 [2024-10-15 04:43:44.632132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.632211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.632224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.199 [2024-10-15 04:43:44.632236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.199 [2024-10-15 04:43:44.632245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.632269] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.199 [2024-10-15 04:43:44.632293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:55.199 [2024-10-15 04:43:44.632331] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.199 [2024-10-15 04:43:44.632349] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:55.199 [2024-10-15 04:43:44.632441] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.199 [2024-10-15 04:43:44.632454] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.199 [2024-10-15 04:43:44.632469] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.199 [2024-10-15 04:43:44.632481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.199 [2024-10-15 04:43:44.632493] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.199 [2024-10-15 04:43:44.632505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.199 [2024-10-15 04:43:44.632519] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.199 [2024-10-15 04:43:44.632528] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.199 [2024-10-15 04:43:44.632538] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.199 [2024-10-15 04:43:44.632549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.632559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.199 [2024-10-15 04:43:44.632570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:55.199 [2024-10-15 04:43:44.632580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.632656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.199 [2024-10-15 04:43:44.632667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.199 [2024-10-15 04:43:44.632678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:55.199 [2024-10-15 04:43:44.632692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.199 [2024-10-15 04:43:44.632780] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.199 [2024-10-15 04:43:44.632793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.199 [2024-10-15 04:43:44.632803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.199 [2024-10-15 04:43:44.632837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.199 [2024-10-15 04:43:44.632850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.199 [2024-10-15 04:43:44.632860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.199 [2024-10-15 04:43:44.632869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.199 [2024-10-15 04:43:44.632880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.199 [2024-10-15 04:43:44.632890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.199 [2024-10-15 04:43:44.632899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.199 [2024-10-15 04:43:44.632909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.199 [2024-10-15 04:43:44.632918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.199 [2024-10-15 04:43:44.632928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.199 [2024-10-15 04:43:44.632951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.199 [2024-10-15 04:43:44.632961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:55.199 [2024-10-15 04:43:44.632995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.199 [2024-10-15 04:43:44.633005] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.199 [2024-10-15 04:43:44.633015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:55.199 [2024-10-15 04:43:44.633025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.199 [2024-10-15 04:43:44.633035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.199 [2024-10-15 04:43:44.633045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.199 [2024-10-15 04:43:44.633055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.199 [2024-10-15 04:43:44.633065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.200 [2024-10-15 04:43:44.633075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.200 [2024-10-15 04:43:44.633094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.200 [2024-10-15 04:43:44.633104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.200 [2024-10-15 04:43:44.633123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.200 [2024-10-15 04:43:44.633133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.200 [2024-10-15 04:43:44.633152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.200 [2024-10-15 04:43:44.633161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.200 [2024-10-15 04:43:44.633180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.200 [2024-10-15 04:43:44.633190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:55.200 [2024-10-15 04:43:44.633199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.200 [2024-10-15 04:43:44.633208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.200 [2024-10-15 04:43:44.633231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:55.200 [2024-10-15 04:43:44.633240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.200 [2024-10-15 04:43:44.633260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:55.200 [2024-10-15 04:43:44.633269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633278] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.200 [2024-10-15 04:43:44.633289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.200 [2024-10-15 04:43:44.633301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.200 [2024-10-15 04:43:44.633312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.200 [2024-10-15 04:43:44.633324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.200 
[2024-10-15 04:43:44.633334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.200 [2024-10-15 04:43:44.633344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.200 [2024-10-15 04:43:44.633354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.200 [2024-10-15 04:43:44.633363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.200 [2024-10-15 04:43:44.633385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.200 [2024-10-15 04:43:44.633396] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.200 [2024-10-15 04:43:44.633413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.200 [2024-10-15 04:43:44.633436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:55.200 [2024-10-15 04:43:44.633446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.200 [2024-10-15 04:43:44.633457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:55.200 [2024-10-15 04:43:44.633467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:55.200 [2024-10-15 04:43:44.633477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:55.200 [2024-10-15 04:43:44.633488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:55.200 [2024-10-15 04:43:44.633498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:55.200 [2024-10-15 04:43:44.633508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:55.200 [2024-10-15 04:43:44.633519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:55.200 [2024-10-15 04:43:44.633569] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.200 [2024-10-15 04:43:44.633581] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.200 [2024-10-15 04:43:44.633602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.200 [2024-10-15 04:43:44.633612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.200 [2024-10-15 04:43:44.633622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.200 [2024-10-15 04:43:44.633633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.200 [2024-10-15 04:43:44.633643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.200 [2024-10-15 04:43:44.633657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:19:55.200 [2024-10-15 04:43:44.633672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.200 [2024-10-15 04:43:44.672990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.200 [2024-10-15 04:43:44.673052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.200 [2024-10-15 04:43:44.673068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.319 ms 00:19:55.200 [2024-10-15 04:43:44.673079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.200 [2024-10-15 04:43:44.673268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.200 [2024-10-15 04:43:44.673283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.200 [2024-10-15 04:43:44.673301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:55.200 [2024-10-15 04:43:44.673312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.751752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.752041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.460 [2024-10-15 04:43:44.752152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.536 ms 00:19:55.460 [2024-10-15 04:43:44.752208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.752421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.752464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.460 [2024-10-15 04:43:44.752497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.460 [2024-10-15 04:43:44.752596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.753163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.753310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.460 [2024-10-15 04:43:44.753391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:19:55.460 [2024-10-15 04:43:44.753440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 
04:43:44.753599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.753619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.460 [2024-10-15 04:43:44.753633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:55.460 [2024-10-15 04:43:44.753646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.778609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.778784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.460 [2024-10-15 04:43:44.778946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.973 ms 00:19:55.460 [2024-10-15 04:43:44.778992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.799400] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.460 [2024-10-15 04:43:44.799456] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.460 [2024-10-15 04:43:44.799474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.799486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.460 [2024-10-15 04:43:44.799500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.325 ms 00:19:55.460 [2024-10-15 04:43:44.799510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.830689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.831008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.460 [2024-10-15 04:43:44.831036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.106 ms 00:19:55.460 [2024-10-15 04:43:44.831047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.850519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.850711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.460 [2024-10-15 04:43:44.850735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.354 ms 00:19:55.460 [2024-10-15 04:43:44.850745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.869390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.869597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.460 [2024-10-15 04:43:44.869622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.498 ms 00:19:55.460 [2024-10-15 04:43:44.869632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.870534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.870570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.460 [2024-10-15 04:43:44.870583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:19:55.460 [2024-10-15 04:43:44.870593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.460 [2024-10-15 04:43:44.958134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:55.460 [2024-10-15 04:43:44.958402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.460 [2024-10-15 04:43:44.958429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.649 ms 00:19:55.460 [2024-10-15 04:43:44.958441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:44.970922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.719 [2024-10-15 04:43:44.987569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:44.987633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.719 [2024-10-15 04:43:44.987650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.973 ms 00:19:55.719 [2024-10-15 04:43:44.987661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:44.987788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:44.987806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.719 [2024-10-15 04:43:44.987840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.719 [2024-10-15 04:43:44.987851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:44.987912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:44.987925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.719 [2024-10-15 04:43:44.987935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:55.719 [2024-10-15 04:43:44.987946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:44.987978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:44.987996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.719 [2024-10-15 04:43:44.988010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:55.719 [2024-10-15 04:43:44.988020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:44.988058] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.719 [2024-10-15 04:43:44.988070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:44.988081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.719 [2024-10-15 04:43:44.988091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:55.719 [2024-10-15 04:43:44.988101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:45.025955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:45.026026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.719 [2024-10-15 04:43:45.026042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.892 ms 00:19:55.719 [2024-10-15 04:43:45.026053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:45.026188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.719 [2024-10-15 04:43:45.026203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:55.719 [2024-10-15 04:43:45.026215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:55.719 [2024-10-15 04:43:45.026225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.719 [2024-10-15 04:43:45.027293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.719 [2024-10-15 04:43:45.031680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.421 ms, result 0 00:19:55.719 [2024-10-15 04:43:45.032562] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.719 [2024-10-15 04:43:45.051178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.654  [2024-10-15T04:43:47.095Z] Copying: 30/256 [MB] (30 MBps) [2024-10-15T04:43:48.471Z] Copying: 58/256 [MB] (28 MBps) [2024-10-15T04:43:49.083Z] Copying: 93/256 [MB] (34 MBps) [2024-10-15T04:43:50.460Z] Copying: 125/256 [MB] (31 MBps) [2024-10-15T04:43:51.397Z] Copying: 154/256 [MB] (29 MBps) [2024-10-15T04:43:52.333Z] Copying: 182/256 [MB] (27 MBps) [2024-10-15T04:43:53.268Z] Copying: 210/256 [MB] (28 MBps) [2024-10-15T04:43:53.835Z] Copying: 238/256 [MB] (28 MBps) [2024-10-15T04:43:53.835Z] Copying: 256/256 [MB] (average 29 MBps)[2024-10-15 04:43:53.653588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:04.331 [2024-10-15 04:43:53.668648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.668836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:04.331 [2024-10-15 04:43:53.668862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:04.331 [2024-10-15 04:43:53.668874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.668905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:04.331 [2024-10-15 04:43:53.673057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.673099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:04.331 [2024-10-15 04:43:53.673111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms 00:20:04.331 [2024-10-15 04:43:53.673138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.673374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.673388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:04.331 [2024-10-15 04:43:53.673399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:04.331 [2024-10-15 04:43:53.673408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.676273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.676407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:04.331 [2024-10-15 04:43:53.676428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:20:04.331 [2024-10-15 04:43:53.676450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.682124] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.682157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:04.331 [2024-10-15 04:43:53.682169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.656 ms 00:20:04.331 [2024-10-15 04:43:53.682179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.720750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.720855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:04.331 [2024-10-15 04:43:53.720890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.554 ms 00:20:04.331 [2024-10-15 04:43:53.720901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.742774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.742957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:04.331 [2024-10-15 04:43:53.742982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.807 ms 00:20:04.331 [2024-10-15 04:43:53.743009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.743165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.743180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:04.331 [2024-10-15 04:43:53.743191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:04.331 [2024-10-15 04:43:53.743202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.780547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.780600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:04.331 [2024-10-15 04:43:53.780615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.370 ms 00:20:04.331 [2024-10-15 04:43:53.780626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.331 [2024-10-15 04:43:53.817205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.331 [2024-10-15 04:43:53.817281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:04.331 [2024-10-15 04:43:53.817298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.570 ms 00:20:04.331 [2024-10-15 04:43:53.817309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.592 [2024-10-15 04:43:53.853472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.592 [2024-10-15 04:43:53.853693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:04.592 [2024-10-15 04:43:53.853718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.152 ms 00:20:04.592 [2024-10-15 04:43:53.853729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.592 [2024-10-15 04:43:53.890752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.592 [2024-10-15 04:43:53.890807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:04.592 [2024-10-15 04:43:53.890836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.899 ms 00:20:04.592 [2024-10-15 04:43:53.890846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:04.592 [2024-10-15 04:43:53.890914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:04.592 [2024-10-15 04:43:53.890932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.890959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.890971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.890982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.890993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:20:04.592 [2024-10-15 04:43:53.891205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:04.592 [2024-10-15 04:43:53.891694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.891985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.892015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.892026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.892037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.892048] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.892060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:04.593 [2024-10-15 04:43:53.892079] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:04.593 [2024-10-15 04:43:53.892090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:20:04.593 [2024-10-15 04:43:53.892102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:04.593 [2024-10-15 04:43:53.892112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:04.593 [2024-10-15 04:43:53.892123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:04.593 [2024-10-15 04:43:53.892134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:04.593 [2024-10-15 04:43:53.892144] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:04.593 [2024-10-15 04:43:53.892155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:04.593 [2024-10-15 04:43:53.892166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:04.593 [2024-10-15 04:43:53.892175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:04.593 [2024-10-15 04:43:53.892185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:04.593 [2024-10-15 04:43:53.892196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.593 [2024-10-15 04:43:53.892207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:04.593 [2024-10-15 04:43:53.892218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:20:04.593 [2024-10-15 04:43:53.892236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:53.912832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.593 [2024-10-15 04:43:53.912873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:04.593 [2024-10-15 04:43:53.912888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.589 ms 00:20:04.593 [2024-10-15 04:43:53.912899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:53.913450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.593 [2024-10-15 04:43:53.913498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:04.593 [2024-10-15 04:43:53.913510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:20:04.593 [2024-10-15 04:43:53.913520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:53.969487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.593 [2024-10-15 04:43:53.969553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.593 [2024-10-15 04:43:53.969569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.593 [2024-10-15 04:43:53.969580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:53.969703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.593 [2024-10-15 04:43:53.969723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.593 
[2024-10-15 04:43:53.969733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.593 [2024-10-15 04:43:53.969748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:53.969808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.593 [2024-10-15 04:43:53.969841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.593 [2024-10-15 04:43:53.969852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.593 [2024-10-15 04:43:53.969862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:53.969883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.593 [2024-10-15 04:43:53.969893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.593 [2024-10-15 04:43:53.969903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.593 [2024-10-15 04:43:53.969918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.593 [2024-10-15 04:43:54.095661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.593 [2024-10-15 04:43:54.095728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.593 [2024-10-15 04:43:54.095745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.593 [2024-10-15 04:43:54.095756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.852 [2024-10-15 04:43:54.198893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.852 [2024-10-15 04:43:54.198963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.852 [2024-10-15 04:43:54.198988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.852 [2024-10-15 04:43:54.198999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.852 [2024-10-15 04:43:54.199092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.852 [2024-10-15 04:43:54.199105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.852 [2024-10-15 04:43:54.199116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.852 [2024-10-15 04:43:54.199127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.852 [2024-10-15 04:43:54.199157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.853 [2024-10-15 04:43:54.199168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.853 [2024-10-15 04:43:54.199179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.853 [2024-10-15 04:43:54.199189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-10-15 04:43:54.199305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.853 [2024-10-15 04:43:54.199319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.853 [2024-10-15 04:43:54.199329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.853 [2024-10-15 04:43:54.199340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-10-15 04:43:54.199375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.853 [2024-10-15 04:43:54.199388] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.853 [2024-10-15 04:43:54.199398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.853 [2024-10-15 04:43:54.199409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-10-15 04:43:54.199453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.853 [2024-10-15 04:43:54.199465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.853 [2024-10-15 04:43:54.199475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.853 [2024-10-15 04:43:54.199485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-10-15 04:43:54.199529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.853 [2024-10-15 04:43:54.199541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.853 [2024-10-15 04:43:54.199551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.853 [2024-10-15 04:43:54.199561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-10-15 04:43:54.199701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.908 ms, result 0 00:20:05.790 00:20:05.790 00:20:05.790 04:43:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:05.790 04:43:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:06.358 04:43:55 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.358 [2024-10-15 04:43:55.854950] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:20:06.358 [2024-10-15 04:43:55.855402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76213 ] 00:20:06.617 [2024-10-15 04:43:56.027687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.877 [2024-10-15 04:43:56.158955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.173 [2024-10-15 04:43:56.534756] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:07.173 [2024-10-15 04:43:56.534857] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:07.459 [2024-10-15 04:43:56.698536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.698606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:07.459 [2024-10-15 04:43:56.698624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.459 [2024-10-15 04:43:56.698636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.702112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.702155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.459 [2024-10-15 04:43:56.702170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.443 ms 00:20:07.459 [2024-10-15 04:43:56.702181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.702298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:07.459 [2024-10-15 04:43:56.703266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:07.459 [2024-10-15 04:43:56.703301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.703313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.459 [2024-10-15 04:43:56.703325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:20:07.459 [2024-10-15 04:43:56.703335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.704966] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:07.459 [2024-10-15 04:43:56.725993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.726057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:07.459 [2024-10-15 04:43:56.726080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.060 ms 00:20:07.459 [2024-10-15 04:43:56.726091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.726217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.726233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:07.459 [2024-10-15 04:43:56.726245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:07.459 [2024-10-15 04:43:56.726255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.733527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:07.459 [2024-10-15 04:43:56.733570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.459 [2024-10-15 04:43:56.733584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.235 ms 00:20:07.459 [2024-10-15 04:43:56.733595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.733712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.733728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.459 [2024-10-15 04:43:56.733740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:07.459 [2024-10-15 04:43:56.733751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.733790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.733803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:07.459 [2024-10-15 04:43:56.733838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:07.459 [2024-10-15 04:43:56.733849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.733878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:07.459 [2024-10-15 04:43:56.739173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.739320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.459 [2024-10-15 04:43:56.739412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.310 ms 00:20:07.459 [2024-10-15 04:43:56.739450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.739560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.739625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:07.459 [2024-10-15 04:43:56.739706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:07.459 [2024-10-15 04:43:56.739739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.739800] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:07.459 [2024-10-15 04:43:56.739886] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:07.459 [2024-10-15 04:43:56.740177] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:07.459 [2024-10-15 04:43:56.740254] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:07.459 [2024-10-15 04:43:56.740574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:07.459 [2024-10-15 04:43:56.740593] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:07.459 [2024-10-15 04:43:56.740607] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:07.459 [2024-10-15 04:43:56.740622] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:07.459 [2024-10-15 04:43:56.740636] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:07.459 [2024-10-15 04:43:56.740648] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:07.459 [2024-10-15 04:43:56.740665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:07.459 [2024-10-15 04:43:56.740675] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:07.459 [2024-10-15 04:43:56.740686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:07.459 [2024-10-15 04:43:56.740698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.740710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:07.459 [2024-10-15 04:43:56.740721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:20:07.459 [2024-10-15 04:43:56.740732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.740851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.459 [2024-10-15 04:43:56.740865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:07.459 [2024-10-15 04:43:56.740877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:07.459 [2024-10-15 04:43:56.740891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.459 [2024-10-15 04:43:56.740990] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:07.459 [2024-10-15 04:43:56.741004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:07.459 [2024-10-15 04:43:56.741016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:07.459 [2024-10-15 04:43:56.741049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:07.459 [2024-10-15 04:43:56.741079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:07.459 [2024-10-15 04:43:56.741099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:07.459 [2024-10-15 04:43:56.741109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:07.459 [2024-10-15 04:43:56.741119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:07.459 [2024-10-15 04:43:56.741150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:07.459 [2024-10-15 04:43:56.741161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:07.459 [2024-10-15 04:43:56.741171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:07.459 [2024-10-15 04:43:56.741191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741200] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:07.459 [2024-10-15 04:43:56.741231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:07.459 [2024-10-15 04:43:56.741261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:07.459 [2024-10-15 04:43:56.741290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:07.459 [2024-10-15 04:43:56.741319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.459 [2024-10-15 04:43:56.741338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:07.459 [2024-10-15 04:43:56.741348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:07.459 [2024-10-15 04:43:56.741367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:07.459 [2024-10-15 04:43:56.741376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:07.459 [2024-10-15 04:43:56.741385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:07.459 [2024-10-15 04:43:56.741395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:07.459 [2024-10-15 04:43:56.741405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:07.459 [2024-10-15 04:43:56.741414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:07.459 [2024-10-15 04:43:56.741449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:07.459 [2024-10-15 04:43:56.741459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.459 [2024-10-15 04:43:56.741468] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:07.460 [2024-10-15 04:43:56.741479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:07.460 [2024-10-15 04:43:56.741490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:07.460 [2024-10-15 04:43:56.741500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.460 [2024-10-15 04:43:56.741510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:07.460 [2024-10-15 04:43:56.741520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:07.460 [2024-10-15 04:43:56.741530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:07.460 
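The layout dump above is internally consistent: the size of the l2p region follows directly from the L2P figures the log reports. A quick back-of-the-envelope check in shell, using only values printed above (23592960 entries, 4 bytes per address):

  entries=23592960   # "L2P entries" reported by ftl_layout_setup above
  addr_sz=4          # "L2P address size" in bytes
  echo "$(( entries * addr_sz / 1024 / 1024 )) MiB"   # prints "90 MiB", matching the l2p region's "blocks: 90.00 MiB"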
[2024-10-15 04:43:56.741540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:07.460 [2024-10-15 04:43:56.741550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:07.460 [2024-10-15 04:43:56.741562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:07.460 [2024-10-15 04:43:56.741574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:07.460 [2024-10-15 04:43:56.741592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:07.460 [2024-10-15 04:43:56.741627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:07.460 [2024-10-15 04:43:56.741638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:07.460 [2024-10-15 04:43:56.741665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:07.460 [2024-10-15 04:43:56.741677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:07.460 [2024-10-15 04:43:56.741688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:07.460 [2024-10-15 04:43:56.741699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:07.460 [2024-10-15 04:43:56.741710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:07.460 [2024-10-15 04:43:56.741721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:07.460 [2024-10-15 04:43:56.741732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:07.460 [2024-10-15 04:43:56.741787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:07.460 [2024-10-15 04:43:56.741799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:07.460 [2024-10-15 04:43:56.741833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:07.460 [2024-10-15 04:43:56.741843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:07.460 [2024-10-15 04:43:56.741870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:07.460 [2024-10-15 04:43:56.741892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.741904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:07.460 [2024-10-15 04:43:56.741915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:20:07.460 [2024-10-15 04:43:56.741930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.783710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.783761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.460 [2024-10-15 04:43:56.783777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.786 ms 00:20:07.460 [2024-10-15 04:43:56.783789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.783967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.783982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:07.460 [2024-10-15 04:43:56.783999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:07.460 [2024-10-15 04:43:56.784009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.843027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.843083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.460 [2024-10-15 04:43:56.843098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.086 ms 00:20:07.460 [2024-10-15 04:43:56.843109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.843251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.843265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.460 [2024-10-15 04:43:56.843277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:07.460 [2024-10-15 04:43:56.843287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.843724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.843737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.460 [2024-10-15 04:43:56.843748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:20:07.460 [2024-10-15 04:43:56.843759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.843915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.843934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.460 [2024-10-15 04:43:56.843945] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:20:07.460 [2024-10-15 04:43:56.843955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.863645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.863696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.460 [2024-10-15 04:43:56.863711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.696 ms 00:20:07.460 [2024-10-15 04:43:56.863723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.883536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:07.460 [2024-10-15 04:43:56.883585] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:07.460 [2024-10-15 04:43:56.883602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.883613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:07.460 [2024-10-15 04:43:56.883626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.735 ms 00:20:07.460 [2024-10-15 04:43:56.883637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.913494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.913563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:07.460 [2024-10-15 04:43:56.913580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.802 ms 00:20:07.460 [2024-10-15 04:43:56.913591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.932332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.932380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:07.460 [2024-10-15 04:43:56.932395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.664 ms 00:20:07.460 [2024-10-15 04:43:56.932405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.950469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.950517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:07.460 [2024-10-15 04:43:56.950531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.000 ms 00:20:07.460 [2024-10-15 04:43:56.950542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.460 [2024-10-15 04:43:56.951324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.460 [2024-10-15 04:43:56.951358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:07.460 [2024-10-15 04:43:56.951371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:20:07.460 [2024-10-15 04:43:56.951382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.037700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.037770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:07.719 [2024-10-15 04:43:57.037787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.426 ms 00:20:07.719 [2024-10-15 04:43:57.037798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.050269] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:07.719 [2024-10-15 04:43:57.066864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.067131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.719 [2024-10-15 04:43:57.067159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.945 ms 00:20:07.719 [2024-10-15 04:43:57.067171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.067314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.067328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.719 [2024-10-15 04:43:57.067340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:07.719 [2024-10-15 04:43:57.067350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.067404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.067416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.719 [2024-10-15 04:43:57.067427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:07.719 [2024-10-15 04:43:57.067437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.067473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.067489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.719 [2024-10-15 04:43:57.067499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:07.719 [2024-10-15 04:43:57.067510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.067546] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.719 [2024-10-15 04:43:57.067558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.067569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.719 [2024-10-15 04:43:57.067579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:07.719 [2024-10-15 04:43:57.067589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.105161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.105217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.719 [2024-10-15 04:43:57.105243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.610 ms 00:20:07.719 [2024-10-15 04:43:57.105255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.719 [2024-10-15 04:43:57.105400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.719 [2024-10-15 04:43:57.105414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.719 [2024-10-15 04:43:57.105426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:07.719 [2024-10-15 04:43:57.105437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
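Each management step in the trace above is logged as a fixed Action / name / duration / status quadruple, which makes runs like this easy to profile after the fact. A minimal sketch of such a summary (not part of the test suite; console.log is a placeholder for a saved copy of this output, and the script assumes one trace record per line, as in the raw console):

  log=console.log   # hypothetical path to a saved copy of this console output
  awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                          printf "%10.3f ms  %s\n", $0, name }' "$log" |
    sort -rn | head   # slowest steps first

On this particular startup, the top of that list would be dominated by Restore P2L checkpoints (86.426 ms), Initialize NV cache (59.086 ms) and Initialize metadata (41.786 ms).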
00:20:07.719 [2024-10-15 04:43:57.106533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:07.719 [2024-10-15 04:43:57.111187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.373 ms, result 0 00:20:07.719 [2024-10-15 04:43:57.112109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.719 [2024-10-15 04:43:57.131379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:07.977  [2024-10-15T04:43:57.481Z] Copying: 4096/4096 [kB] (average 32 MBps)[2024-10-15 04:43:57.261984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.977 [2024-10-15 04:43:57.276506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.977 [2024-10-15 04:43:57.276677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:07.978 [2024-10-15 04:43:57.276703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:07.978 [2024-10-15 04:43:57.276715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.276757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:07.978 [2024-10-15 04:43:57.280834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.280863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:07.978 [2024-10-15 04:43:57.280875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.067 ms 00:20:07.978 [2024-10-15 04:43:57.280885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.282791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.282846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:07.978 [2024-10-15 04:43:57.282859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms 00:20:07.978 [2024-10-15 04:43:57.282870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.286001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.286034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:07.978 [2024-10-15 04:43:57.286052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:20:07.978 [2024-10-15 04:43:57.286061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.291705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.291740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:07.978 [2024-10-15 04:43:57.291752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:20:07.978 [2024-10-15 04:43:57.291777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.328360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.328412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:07.978 [2024-10-15 04:43:57.328427] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.557 ms 00:20:07.978 [2024-10-15 04:43:57.328438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.350915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.351124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:07.978 [2024-10-15 04:43:57.351160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.449 ms 00:20:07.978 [2024-10-15 04:43:57.351174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.351383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.351396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:07.978 [2024-10-15 04:43:57.351408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:07.978 [2024-10-15 04:43:57.351418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.388763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.388810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:07.978 [2024-10-15 04:43:57.388838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.375 ms 00:20:07.978 [2024-10-15 04:43:57.388849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.425333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.425380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:07.978 [2024-10-15 04:43:57.425395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.453 ms 00:20:07.978 [2024-10-15 04:43:57.425405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.978 [2024-10-15 04:43:57.461272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.978 [2024-10-15 04:43:57.461330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:07.978 [2024-10-15 04:43:57.461345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.862 ms 00:20:07.978 [2024-10-15 04:43:57.461355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.237 [2024-10-15 04:43:57.496451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.237 [2024-10-15 04:43:57.496490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.237 [2024-10-15 04:43:57.496503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.065 ms 00:20:08.237 [2024-10-15 04:43:57.496513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.237 [2024-10-15 04:43:57.496598] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.237 [2024-10-15 04:43:57.496615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:08.237 [2024-10-15 04:43:57.496659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:08.237 [2024-10-15 04:43:57.496711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.496998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497477] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:08.238 [2024-10-15 04:43:57.497729] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:08.238 [2024-10-15 04:43:57.497739] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:20:08.238 [2024-10-15 04:43:57.497750] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:08.238 [2024-10-15 04:43:57.497759] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:08.238 [2024-10-15 04:43:57.497769] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:08.238 [2024-10-15 04:43:57.497779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:08.239 [2024-10-15 04:43:57.497788] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.239 [2024-10-15 04:43:57.497799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.239 [2024-10-15 04:43:57.497809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.239 [2024-10-15 04:43:57.497827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.239 [2024-10-15 04:43:57.497837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.239 [2024-10-15 04:43:57.497846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.239 [2024-10-15 04:43:57.497857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.239 [2024-10-15 04:43:57.497871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:20:08.239 [2024-10-15 04:43:57.497881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.517657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.239 [2024-10-15 04:43:57.517697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.239 [2024-10-15 04:43:57.517710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.785 ms 00:20:08.239 [2024-10-15 04:43:57.517721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.518288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.239 [2024-10-15 04:43:57.518301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.239 [2024-10-15 04:43:57.518312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:20:08.239 [2024-10-15 04:43:57.518322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.573946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.239 [2024-10-15 04:43:57.574013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.239 [2024-10-15 04:43:57.574028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.239 [2024-10-15 04:43:57.574038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.574145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.239 [2024-10-15 04:43:57.574157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.239 [2024-10-15 04:43:57.574167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.239 [2024-10-15 04:43:57.574177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.574232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.239 [2024-10-15 04:43:57.574245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.239 [2024-10-15 04:43:57.574254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.239 [2024-10-15 04:43:57.574264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.574284] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.239 [2024-10-15 04:43:57.574298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.239 [2024-10-15 04:43:57.574308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.239 [2024-10-15 04:43:57.574318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.239 [2024-10-15 04:43:57.698614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.239 [2024-10-15 04:43:57.698684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.239 [2024-10-15 04:43:57.698699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.239 [2024-10-15 04:43:57.698710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.499 [2024-10-15 04:43:57.802263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.499 [2024-10-15 04:43:57.802389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.499 [2024-10-15 04:43:57.802448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.499 [2024-10-15 04:43:57.802602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.499 [2024-10-15 04:43:57.802670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.499 [2024-10-15 04:43:57.802743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802753] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.499 [2024-10-15 04:43:57.802807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.499 [2024-10-15 04:43:57.802834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.499 [2024-10-15 04:43:57.802848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.499 [2024-10-15 04:43:57.802987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.357 ms, result 0 00:20:09.434 00:20:09.434 00:20:09.434 04:43:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76247 00:20:09.434 04:43:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:09.434 04:43:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76247 00:20:09.434 04:43:58 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76247 ']' 00:20:09.434 04:43:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.434 04:43:58 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.434 04:43:58 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.434 04:43:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.434 04:43:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:09.693 [2024-10-15 04:43:58.987450] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
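The waitforlisten step above simply polls until the freshly started spdk_tgt answers on its RPC socket before the trim test proceeds. A rough sketch of that idea, reusing the socket path the log prints (the loop bounds and the use of rpc_get_methods as a liveness probe are illustrative, not the suite's exact code):

  sock=/var/tmp/spdk.sock          # path from the "Waiting for process..." message above
  for _ in $(seq 1 100); do        # roughly a 10 s budget at 0.1 s per probe
    if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
      break                        # target is up and serving RPCs
    fi
    sleep 0.1
  done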
00:20:09.693 [2024-10-15 04:43:58.987590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76247 ] 00:20:09.693 [2024-10-15 04:43:59.156604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.982 [2024-10-15 04:43:59.276243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.922 04:44:00 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.922 04:44:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:20:10.922 04:44:00 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:10.922 [2024-10-15 04:44:00.377361] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.922 [2024-10-15 04:44:00.377432] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.185 [2024-10-15 04:44:00.566582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.566645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.185 [2024-10-15 04:44:00.566666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:11.185 [2024-10-15 04:44:00.566677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.570657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.570726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.185 [2024-10-15 04:44:00.570742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:20:11.185 [2024-10-15 04:44:00.570754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.570915] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.185 [2024-10-15 04:44:00.572011] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.185 [2024-10-15 04:44:00.572047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.572059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.185 [2024-10-15 04:44:00.572072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:20:11.185 [2024-10-15 04:44:00.572082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.573581] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:11.185 [2024-10-15 04:44:00.593834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.593919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:11.185 [2024-10-15 04:44:00.593935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.290 ms 00:20:11.185 [2024-10-15 04:44:00.593968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.594116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.594137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:11.185 [2024-10-15 04:44:00.594150] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:11.185 [2024-10-15 04:44:00.594166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.601704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.601766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.185 [2024-10-15 04:44:00.601780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.489 ms 00:20:11.185 [2024-10-15 04:44:00.601795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.601943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.601963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.185 [2024-10-15 04:44:00.601975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:11.185 [2024-10-15 04:44:00.601988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.602019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.602039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.185 [2024-10-15 04:44:00.602050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:11.185 [2024-10-15 04:44:00.602063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.602091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:11.185 [2024-10-15 04:44:00.607166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.607208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.185 [2024-10-15 04:44:00.607223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.087 ms 00:20:11.185 [2024-10-15 04:44:00.607234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.607319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.607331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.185 [2024-10-15 04:44:00.607344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:11.185 [2024-10-15 04:44:00.607354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.607381] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:11.185 [2024-10-15 04:44:00.607406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:11.185 [2024-10-15 04:44:00.607465] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:11.185 [2024-10-15 04:44:00.607486] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:11.185 [2024-10-15 04:44:00.607584] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.185 [2024-10-15 04:44:00.607598] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.185 [2024-10-15 04:44:00.607617] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.185 [2024-10-15 04:44:00.607630] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.185 [2024-10-15 04:44:00.607657] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.185 [2024-10-15 04:44:00.607669] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:11.185 [2024-10-15 04:44:00.607684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.185 [2024-10-15 04:44:00.607694] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.185 [2024-10-15 04:44:00.607713] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.185 [2024-10-15 04:44:00.607724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.607739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.185 [2024-10-15 04:44:00.607750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:20:11.185 [2024-10-15 04:44:00.607764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.607857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.185 [2024-10-15 04:44:00.607874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.185 [2024-10-15 04:44:00.607889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:11.185 [2024-10-15 04:44:00.607905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.185 [2024-10-15 04:44:00.607995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.185 [2024-10-15 04:44:00.608014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.185 [2024-10-15 04:44:00.608024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.185 [2024-10-15 04:44:00.608064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.185 [2024-10-15 04:44:00.608104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.185 [2024-10-15 04:44:00.608127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.185 [2024-10-15 04:44:00.608141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:11.185 [2024-10-15 04:44:00.608151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.185 [2024-10-15 04:44:00.608166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.185 [2024-10-15 04:44:00.608177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:11.185 [2024-10-15 04:44:00.608191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.185 
[2024-10-15 04:44:00.608201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.185 [2024-10-15 04:44:00.608215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.185 [2024-10-15 04:44:00.608255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.185 [2024-10-15 04:44:00.608291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.185 [2024-10-15 04:44:00.608320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.185 [2024-10-15 04:44:00.608359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.185 [2024-10-15 04:44:00.608381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.185 [2024-10-15 04:44:00.608390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:11.185 [2024-10-15 04:44:00.608402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.185 [2024-10-15 04:44:00.608411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.185 [2024-10-15 04:44:00.608423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:11.185 [2024-10-15 04:44:00.608432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.185 [2024-10-15 04:44:00.608444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.185 [2024-10-15 04:44:00.608453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:11.185 [2024-10-15 04:44:00.608467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.186 [2024-10-15 04:44:00.608476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.186 [2024-10-15 04:44:00.608492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:11.186 [2024-10-15 04:44:00.608501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.186 [2024-10-15 04:44:00.608515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.186 [2024-10-15 04:44:00.608526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.186 [2024-10-15 04:44:00.608540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.186 [2024-10-15 04:44:00.608556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.186 [2024-10-15 04:44:00.608571] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:11.186 [2024-10-15 04:44:00.608581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.186 [2024-10-15 04:44:00.608595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.186 [2024-10-15 04:44:00.608605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.186 [2024-10-15 04:44:00.608619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.186 [2024-10-15 04:44:00.608629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.186 [2024-10-15 04:44:00.608645] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.186 [2024-10-15 04:44:00.608658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:11.186 [2024-10-15 04:44:00.608690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:11.186 [2024-10-15 04:44:00.608705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:11.186 [2024-10-15 04:44:00.608716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:11.186 [2024-10-15 04:44:00.608731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:11.186 [2024-10-15 04:44:00.608742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:11.186 [2024-10-15 04:44:00.608757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:11.186 [2024-10-15 04:44:00.608767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:11.186 [2024-10-15 04:44:00.608782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:11.186 [2024-10-15 04:44:00.608793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:11.186 [2024-10-15 04:44:00.608887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.186 [2024-10-15 
04:44:00.608899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.186 [2024-10-15 04:44:00.608943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.186 [2024-10-15 04:44:00.608958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.186 [2024-10-15 04:44:00.608969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.186 [2024-10-15 04:44:00.608985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.186 [2024-10-15 04:44:00.608996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.186 [2024-10-15 04:44:00.609011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:20:11.186 [2024-10-15 04:44:00.609022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.186 [2024-10-15 04:44:00.652085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.186 [2024-10-15 04:44:00.652149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.186 [2024-10-15 04:44:00.652170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.062 ms 00:20:11.186 [2024-10-15 04:44:00.652181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.186 [2024-10-15 04:44:00.652352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.186 [2024-10-15 04:44:00.652371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:11.186 [2024-10-15 04:44:00.652387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:11.186 [2024-10-15 04:44:00.652398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.704422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.704481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.459 [2024-10-15 04:44:00.704505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.071 ms 00:20:11.459 [2024-10-15 04:44:00.704524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.704667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.704681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.459 [2024-10-15 04:44:00.704699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.459 [2024-10-15 04:44:00.704710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.705181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.705208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.459 [2024-10-15 04:44:00.705241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:20:11.459 [2024-10-15 04:44:00.705253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.705400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.705421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.459 [2024-10-15 04:44:00.705439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:11.459 [2024-10-15 04:44:00.705450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.728896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.728955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.459 [2024-10-15 04:44:00.728978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.448 ms 00:20:11.459 [2024-10-15 04:44:00.728991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.750949] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:11.459 [2024-10-15 04:44:00.751033] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:11.459 [2024-10-15 04:44:00.751056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.751069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:11.459 [2024-10-15 04:44:00.751088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.939 ms 00:20:11.459 [2024-10-15 04:44:00.751099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.784634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.784715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:11.459 [2024-10-15 04:44:00.784739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.370 ms 00:20:11.459 [2024-10-15 04:44:00.784752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.805610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.805679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:11.459 [2024-10-15 04:44:00.805708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.723 ms 00:20:11.459 [2024-10-15 04:44:00.805719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.826244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.826326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:11.459 [2024-10-15 04:44:00.826348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.372 ms 00:20:11.459 [2024-10-15 04:44:00.826359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.827293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.827328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:11.459 [2024-10-15 04:44:00.827347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:20:11.459 [2024-10-15 04:44:00.827358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 
04:44:00.927018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.927091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:11.459 [2024-10-15 04:44:00.927111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.778 ms 00:20:11.459 [2024-10-15 04:44:00.927122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.940383] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:11.459 [2024-10-15 04:44:00.957502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.957584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:11.459 [2024-10-15 04:44:00.957601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.243 ms 00:20:11.459 [2024-10-15 04:44:00.957616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.957737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.957755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:11.459 [2024-10-15 04:44:00.957766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:11.459 [2024-10-15 04:44:00.957788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.957860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.957880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:11.459 [2024-10-15 04:44:00.957891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:11.459 [2024-10-15 04:44:00.957908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.957935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.957958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:11.459 [2024-10-15 04:44:00.957970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:11.459 [2024-10-15 04:44:00.957985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.459 [2024-10-15 04:44:00.958029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:11.459 [2024-10-15 04:44:00.958053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.459 [2024-10-15 04:44:00.958065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:11.459 [2024-10-15 04:44:00.958080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:11.459 [2024-10-15 04:44:00.958097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.718 [2024-10-15 04:44:00.994618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.718 [2024-10-15 04:44:00.994714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:11.718 [2024-10-15 04:44:00.994746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.534 ms 00:20:11.718 [2024-10-15 04:44:00.994764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.718 [2024-10-15 04:44:00.994975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.718 [2024-10-15 04:44:00.994996] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:11.718 [2024-10-15 04:44:00.995020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:20:11.718 [2024-10-15 04:44:00.995036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.718 [2024-10-15 04:44:00.996231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:11.718 [2024-10-15 04:44:01.001944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 429.936 ms, result 0
00:20:11.718 [2024-10-15 04:44:01.002803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:11.718 Some configs were skipped because the RPC state that can call them passed over.
00:20:11.718 04:44:01 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:11.977 [2024-10-15 04:44:01.255172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:11.977 [2024-10-15 04:44:01.255245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:11.977 [2024-10-15 04:44:01.255263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms
00:20:11.977 [2024-10-15 04:44:01.255280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.977 [2024-10-15 04:44:01.255322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.433 ms, result 0
00:20:11.977 true
00:20:11.977 04:44:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:11.977 [2024-10-15 04:44:01.446876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:11.977 [2024-10-15 04:44:01.446924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:11.977 [2024-10-15 04:44:01.446942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms
00:20:11.977 [2024-10-15 04:44:01.446952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.977 [2024-10-15 04:44:01.446993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.281 ms, result 0
00:20:11.977 true
00:20:11.977 04:44:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76247
00:20:11.977 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76247 ']'
00:20:11.977 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76247
00:20:11.977 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:20:11.977 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:11.977 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76247
00:20:12.235 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:12.235 killing process with pid 76247
00:20:12.235 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:12.235 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76247'
00:20:12.235 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76247
00:20:12.235 04:44:01 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76247
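The two bdev_ftl_unmap calls traced above exercise trim at both extremes of the device's logical space: the layout dump reports 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the second call covers exactly the last 1024 blocks. A minimal sketch of this boundary-trim step, assuming an ftl0 bdev is already up; SPDK_DIR is a hypothetical shorthand for the /home/vagrant/spdk_repo/spdk path used in the trace:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
L2P_ENTRIES=23592960   # from the "L2P entries" line in the layout dump
NB=1024                # blocks per trim call
"$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$NB"
"$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba "$((L2P_ENTRIES - NB))" --num_blocks "$NB"
# 23592960 - 1024 = 23591936, matching the second --lba in the trace

Each call runs a short 'FTL trim' management process, visible above as Action/name/duration/status trace steps followed by finish_msg, and rpc.py prints the bare true seen in the log on success. The 'FTL shutdown' management process that follows begins here:

00:20:13.170 [2024-10-15 04:44:02.631326]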
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.170 [2024-10-15 04:44:02.631384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:13.170 [2024-10-15 04:44:02.631399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.170 [2024-10-15 04:44:02.631411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.170 [2024-10-15 04:44:02.631435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:13.170 [2024-10-15 04:44:02.635602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.170 [2024-10-15 04:44:02.635635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:13.170 [2024-10-15 04:44:02.635656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:20:13.170 [2024-10-15 04:44:02.635666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.170 [2024-10-15 04:44:02.635928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.170 [2024-10-15 04:44:02.635946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:13.170 [2024-10-15 04:44:02.635959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:13.170 [2024-10-15 04:44:02.635969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.170 [2024-10-15 04:44:02.639205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.170 [2024-10-15 04:44:02.639236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:13.170 [2024-10-15 04:44:02.639250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:20:13.170 [2024-10-15 04:44:02.639264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.170 [2024-10-15 04:44:02.644928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.170 [2024-10-15 04:44:02.644959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:13.170 [2024-10-15 04:44:02.644973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.631 ms 00:20:13.171 [2024-10-15 04:44:02.644983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.171 [2024-10-15 04:44:02.659989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.171 [2024-10-15 04:44:02.660021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:13.171 [2024-10-15 04:44:02.660041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.972 ms 00:20:13.171 [2024-10-15 04:44:02.660062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.171 [2024-10-15 04:44:02.670375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.171 [2024-10-15 04:44:02.670408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:13.171 [2024-10-15 04:44:02.670425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.272 ms 00:20:13.171 [2024-10-15 04:44:02.670438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.171 [2024-10-15 04:44:02.670567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.171 [2024-10-15 04:44:02.670580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:13.171 [2024-10-15 04:44:02.670593] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:13.171 [2024-10-15 04:44:02.670603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.430 [2024-10-15 04:44:02.685761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.430 [2024-10-15 04:44:02.685795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:13.430 [2024-10-15 04:44:02.685821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.150 ms 00:20:13.430 [2024-10-15 04:44:02.685832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.430 [2024-10-15 04:44:02.700723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.430 [2024-10-15 04:44:02.700755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:13.430 [2024-10-15 04:44:02.700781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.855 ms 00:20:13.430 [2024-10-15 04:44:02.700791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.430 [2024-10-15 04:44:02.715421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.430 [2024-10-15 04:44:02.715452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:13.430 [2024-10-15 04:44:02.715471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.579 ms 00:20:13.430 [2024-10-15 04:44:02.715481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.430 [2024-10-15 04:44:02.729750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.430 [2024-10-15 04:44:02.729783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:13.430 [2024-10-15 04:44:02.729802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.204 ms 00:20:13.430 [2024-10-15 04:44:02.729811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.430 [2024-10-15 04:44:02.729903] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:13.430 [2024-10-15 04:44:02.729923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.729940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.729953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.729970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.729981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.730002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.730013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.730029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.730040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.730056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 
04:44:02.730067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:13.430 [2024-10-15 04:44:02.730082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:13.431 [2024-10-15 04:44:02.730412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.730997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:13.431 [2024-10-15 04:44:02.731308] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:13.431 [2024-10-15 04:44:02.731328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:20:13.431 [2024-10-15 04:44:02.731351] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:13.431 [2024-10-15 04:44:02.731372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:13.431 [2024-10-15 04:44:02.731387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:13.431 [2024-10-15 04:44:02.731402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:13.431 [2024-10-15 04:44:02.731412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:13.432 [2024-10-15 04:44:02.731427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:13.432 [2024-10-15 04:44:02.731437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:13.432 [2024-10-15 04:44:02.731450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:13.432 [2024-10-15 04:44:02.731460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:13.432 [2024-10-15 04:44:02.731475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
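The band dump above is uniform by design: after the trims and ahead of this clean shutdown no user data remains, so all 100 bands report 0 / 261120 valid blocks, wr_cnt 0 and state free, and with user writes at 0 the write amplification factor (total writes / user writes, here 960 / 0) is reported as inf. A quick consistency check on those numbers, assuming FTL's 4 KiB block size (the same assumption reconciles the 0x1900000-block base data region elsewhere in this log with its 102400.00 MiB size):

BLK=4096
echo "one band:  $(( 261120 * BLK / 1024 / 1024 )) MiB"        # 1020 MiB
echo "100 bands: $(( 100 * 261120 * BLK / 1024 / 1024 )) MiB"  # 102000 MiB
# just under the 102400.00 MiB data_btm region; the remainder presumably holds per-band metadata

The Action entry just above opens the 'Dump statistics' step whose name, duration and status continue below.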
00:20:13.432 [2024-10-15 04:44:02.731486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:13.432 [2024-10-15 04:44:02.731501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:20:13.432 [2024-10-15 04:44:02.731511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.432 [2024-10-15 04:44:02.751576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.432 [2024-10-15 04:44:02.751609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:13.432 [2024-10-15 04:44:02.751633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.065 ms 00:20:13.432 [2024-10-15 04:44:02.751643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.432 [2024-10-15 04:44:02.752215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.432 [2024-10-15 04:44:02.752232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:13.432 [2024-10-15 04:44:02.752247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:20:13.432 [2024-10-15 04:44:02.752258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.432 [2024-10-15 04:44:02.821975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.432 [2024-10-15 04:44:02.822027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.432 [2024-10-15 04:44:02.822047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.432 [2024-10-15 04:44:02.822058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.432 [2024-10-15 04:44:02.822182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.432 [2024-10-15 04:44:02.822196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.432 [2024-10-15 04:44:02.822212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.432 [2024-10-15 04:44:02.822223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.432 [2024-10-15 04:44:02.822287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.432 [2024-10-15 04:44:02.822300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.432 [2024-10-15 04:44:02.822320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.432 [2024-10-15 04:44:02.822330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.432 [2024-10-15 04:44:02.822354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.432 [2024-10-15 04:44:02.822366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.432 [2024-10-15 04:44:02.822381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.432 [2024-10-15 04:44:02.822391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-10-15 04:44:02.947182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.691 [2024-10-15 04:44:02.947243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.691 [2024-10-15 04:44:02.947261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.691 [2024-10-15 04:44:02.947272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.691 [2024-10-15 
04:44:03.048878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.048931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:13.691 [2024-10-15 04:44:03.048951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.048962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.049094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:13.691 [2024-10-15 04:44:03.049114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.049125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.049171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:13.691 [2024-10-15 04:44:03.049185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.049196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.049341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:13.691 [2024-10-15 04:44:03.049362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.049373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.049432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:13.691 [2024-10-15 04:44:03.049448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.049459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.049515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:13.691 [2024-10-15 04:44:03.049540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.049551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:13.691 [2024-10-15 04:44:03.049615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:13.691 [2024-10-15 04:44:03.049630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:13.691 [2024-10-15 04:44:03.049640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.691 [2024-10-15 04:44:03.049787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.107 ms, result 0
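Note the ordering in the shutdown trace above: ftl_mngt logs the startup pipeline's steps again as 'Rollback' entries in reverse order, ending with Open cache bdev and Open base bdev, which mirror the earliest startup Actions; each is logged here with duration 0.000 ms. The step traced next reads the device back out for verification. A sketch of that readback, reusing the hypothetical SPDK_DIR shorthand; the option semantics are my reading of spdk_dd rather than something this log states: --ib selects the input bdev, --of the output file, --count the number of blocks to copy, and --json the config from which the bdev stack is re-created (which is why a second FTL startup follows below):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 \
    --of="$SPDK_DIR/test/ftl/data" \
    --count=65536 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json"
# at the 4 KiB FTL block size, 65536 blocks = 256 MiB read back from ftl0

00:20:14.647 04:44:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536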
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.906 [2024-10-15 04:44:04.188484] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:20:14.906 [2024-10-15 04:44:04.188613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76316 ] 00:20:14.906 [2024-10-15 04:44:04.358045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.204 [2024-10-15 04:44:04.477511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.466 [2024-10-15 04:44:04.857682] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.466 [2024-10-15 04:44:04.857762] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.725 [2024-10-15 04:44:05.020721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.725 [2024-10-15 04:44:05.020777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.725 [2024-10-15 04:44:05.020793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.725 [2024-10-15 04:44:05.020804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.725 [2024-10-15 04:44:05.023975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.725 [2024-10-15 04:44:05.024009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.726 [2024-10-15 04:44:05.024022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:20:15.726 [2024-10-15 04:44:05.024032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.024133] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.726 [2024-10-15 04:44:05.025195] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.726 [2024-10-15 04:44:05.025231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.025242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.726 [2024-10-15 04:44:05.025253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:20:15.726 [2024-10-15 04:44:05.025263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.026746] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:15.726 [2024-10-15 04:44:05.047677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.047721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:15.726 [2024-10-15 04:44:05.047741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.965 ms 00:20:15.726 [2024-10-15 04:44:05.047752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.047874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.047889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:15.726 [2024-10-15 04:44:05.047900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:15.726 [2024-10-15 
04:44:05.047910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.054920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.054957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.726 [2024-10-15 04:44:05.054970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:20:15.726 [2024-10-15 04:44:05.054980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.055085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.055101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.726 [2024-10-15 04:44:05.055112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:15.726 [2024-10-15 04:44:05.055122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.055154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.055166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.726 [2024-10-15 04:44:05.055180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.726 [2024-10-15 04:44:05.055190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.055216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:15.726 [2024-10-15 04:44:05.059864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.059893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.726 [2024-10-15 04:44:05.059905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:20:15.726 [2024-10-15 04:44:05.059915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.059986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.059998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.726 [2024-10-15 04:44:05.060009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.726 [2024-10-15 04:44:05.060019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.060043] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:15.726 [2024-10-15 04:44:05.060066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:15.726 [2024-10-15 04:44:05.060103] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:15.726 [2024-10-15 04:44:05.060120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:15.726 [2024-10-15 04:44:05.060209] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:15.726 [2024-10-15 04:44:05.060222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.726 [2024-10-15 04:44:05.060235] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
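The layout dump that follows (a twin of the one printed during the first startup earlier in this log) can be cross-checked against the superblock metadata dump, since both describe the same regions, one in MiB and one in 4 KiB FTL blocks. A worked check for the l2p region, which the superblock lists as type:0x2 blk_offs:0x20 blk_sz:0x5a00:

BLK=4096
echo "l2p offset: $(( 0x20 * BLK )) bytes"                 # 131072 B, the 0.12 MiB in the dump
echo "l2p size:   $(( 0x5a00 * BLK / 1024 / 1024 )) MiB"   # 90 MiB, matching blocks: 90.00 MiB
echo "l2p table:  $(( 23592960 * 4 / 1024 / 1024 )) MiB"   # L2P entries * 4 B address size = 90 MiB

All three figures agree: the 90.00 MiB region is sized exactly for 23592960 four-byte L2P entries.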
00:20:15.726 [2024-10-15 04:44:05.060248] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060259] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060270] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:15.726 [2024-10-15 04:44:05.060284] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.726 [2024-10-15 04:44:05.060294] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:15.726 [2024-10-15 04:44:05.060304] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:15.726 [2024-10-15 04:44:05.060314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.060324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.726 [2024-10-15 04:44:05.060335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:15.726 [2024-10-15 04:44:05.060344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.060421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.726 [2024-10-15 04:44:05.060432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.726 [2024-10-15 04:44:05.060442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:15.726 [2024-10-15 04:44:05.060455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.726 [2024-10-15 04:44:05.060542] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.726 [2024-10-15 04:44:05.060554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.726 [2024-10-15 04:44:05.060564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.726 [2024-10-15 04:44:05.060595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.726 [2024-10-15 04:44:05.060623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.726 [2024-10-15 04:44:05.060641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.726 [2024-10-15 04:44:05.060650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:15.726 [2024-10-15 04:44:05.060660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.726 [2024-10-15 04:44:05.060680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.726 [2024-10-15 04:44:05.060689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:15.726 [2024-10-15 04:44:05.060698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:15.726 [2024-10-15 04:44:05.060717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.726 [2024-10-15 04:44:05.060744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.726 [2024-10-15 04:44:05.060771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.726 [2024-10-15 04:44:05.060798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.726 [2024-10-15 04:44:05.060838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.726 [2024-10-15 04:44:05.060865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.726 [2024-10-15 04:44:05.060883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.726 [2024-10-15 04:44:05.060892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:15.726 [2024-10-15 04:44:05.060902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.726 [2024-10-15 04:44:05.060912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:15.726 [2024-10-15 04:44:05.060921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:15.726 [2024-10-15 04:44:05.060929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:15.726 [2024-10-15 04:44:05.060947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:15.726 [2024-10-15 04:44:05.060956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.060964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.726 [2024-10-15 04:44:05.060974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.726 [2024-10-15 04:44:05.060984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.726 [2024-10-15 04:44:05.060994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.726 [2024-10-15 04:44:05.061004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.726 [2024-10-15 04:44:05.061013] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.726 [2024-10-15 04:44:05.061023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.726 [2024-10-15 04:44:05.061032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.726 [2024-10-15 04:44:05.061041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.727 [2024-10-15 04:44:05.061050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.727 [2024-10-15 04:44:05.061060] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.727 [2024-10-15 04:44:05.061076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:15.727 [2024-10-15 04:44:05.061097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:15.727 [2024-10-15 04:44:05.061107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:15.727 [2024-10-15 04:44:05.061118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:15.727 [2024-10-15 04:44:05.061128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:15.727 [2024-10-15 04:44:05.061138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:15.727 [2024-10-15 04:44:05.061148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:15.727 [2024-10-15 04:44:05.061158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:15.727 [2024-10-15 04:44:05.061168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:15.727 [2024-10-15 04:44:05.061178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:15.727 [2024-10-15 04:44:05.061243] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.727 [2024-10-15 04:44:05.061254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.727 [2024-10-15 04:44:05.061275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.727 [2024-10-15 04:44:05.061285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.727 [2024-10-15 04:44:05.061296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.727 [2024-10-15 04:44:05.061307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.061317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.727 [2024-10-15 04:44:05.061327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:20:15.727 [2024-10-15 04:44:05.061340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.101067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.101112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.727 [2024-10-15 04:44:05.101128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.734 ms 00:20:15.727 [2024-10-15 04:44:05.101139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.101298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.101312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.727 [2024-10-15 04:44:05.101328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:15.727 [2024-10-15 04:44:05.101338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.155570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.155632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.727 [2024-10-15 04:44:05.155647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.294 ms 00:20:15.727 [2024-10-15 04:44:05.155658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.155785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.155797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.727 [2024-10-15 04:44:05.155809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:15.727 [2024-10-15 04:44:05.155830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.156263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.156281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.727 [2024-10-15 04:44:05.156292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:15.727 [2024-10-15 04:44:05.156302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.156425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.156442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.727 [2024-10-15 04:44:05.156453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:15.727 [2024-10-15 04:44:05.156463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.175037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.175079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.727 [2024-10-15 04:44:05.175094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.581 ms 00:20:15.727 [2024-10-15 04:44:05.175105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.194489] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:15.727 [2024-10-15 04:44:05.194533] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:15.727 [2024-10-15 04:44:05.194548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.194560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:15.727 [2024-10-15 04:44:05.194572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.337 ms 00:20:15.727 [2024-10-15 04:44:05.194582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.727 [2024-10-15 04:44:05.224557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.727 [2024-10-15 04:44:05.224616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:15.727 [2024-10-15 04:44:05.224632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.928 ms 00:20:15.727 [2024-10-15 04:44:05.224643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.986 [2024-10-15 04:44:05.243290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.986 [2024-10-15 04:44:05.243330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:15.986 [2024-10-15 04:44:05.243345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:20:15.986 [2024-10-15 04:44:05.243355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.986 [2024-10-15 04:44:05.261241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.986 [2024-10-15 04:44:05.261294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:15.986 [2024-10-15 04:44:05.261309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.828 ms 00:20:15.986 [2024-10-15 04:44:05.261319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.986 [2024-10-15 04:44:05.262149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.986 [2024-10-15 04:44:05.262174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.986 [2024-10-15 04:44:05.262186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:20:15.986 [2024-10-15 04:44:05.262195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.986 [2024-10-15 04:44:05.348617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.986 [2024-10-15 
04:44:05.348685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:15.987 [2024-10-15 04:44:05.348702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.532 ms 00:20:15.987 [2024-10-15 04:44:05.348715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.362669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:15.987 [2024-10-15 04:44:05.380289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.380356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.987 [2024-10-15 04:44:05.380375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.459 ms 00:20:15.987 [2024-10-15 04:44:05.380386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.380537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.380551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:15.987 [2024-10-15 04:44:05.380563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:15.987 [2024-10-15 04:44:05.380573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.380629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.380641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.987 [2024-10-15 04:44:05.380664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:15.987 [2024-10-15 04:44:05.380675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.380730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.380748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.987 [2024-10-15 04:44:05.380760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:15.987 [2024-10-15 04:44:05.380770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.380806] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:15.987 [2024-10-15 04:44:05.380819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.380844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:15.987 [2024-10-15 04:44:05.380855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:15.987 [2024-10-15 04:44:05.380866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.420740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.420802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.987 [2024-10-15 04:44:05.420826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.907 ms 00:20:15.987 [2024-10-15 04:44:05.420837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.421019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.987 [2024-10-15 04:44:05.421034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.987 [2024-10-15 
04:44:05.421058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:15.987 [2024-10-15 04:44:05.421068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.987 [2024-10-15 04:44:05.422111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.987 [2024-10-15 04:44:05.427329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.664 ms, result 0 00:20:15.987 [2024-10-15 04:44:05.428277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.987 [2024-10-15 04:44:05.447324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.364  [2024-10-15T04:44:07.804Z] Copying: 34/256 [MB] (34 MBps) [2024-10-15T04:44:08.739Z] Copying: 67/256 [MB] (33 MBps) [2024-10-15T04:44:09.675Z] Copying: 97/256 [MB] (29 MBps) [2024-10-15T04:44:10.610Z] Copying: 128/256 [MB] (31 MBps) [2024-10-15T04:44:11.544Z] Copying: 156/256 [MB] (27 MBps) [2024-10-15T04:44:12.532Z] Copying: 184/256 [MB] (28 MBps) [2024-10-15T04:44:13.910Z] Copying: 213/256 [MB] (29 MBps) [2024-10-15T04:44:14.170Z] Copying: 243/256 [MB] (29 MBps) [2024-10-15T04:44:14.429Z] Copying: 256/256 [MB] (average 30 MBps)[2024-10-15 04:44:14.353933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:24.925 [2024-10-15 04:44:14.380244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.925 [2024-10-15 04:44:14.380345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:24.925 [2024-10-15 04:44:14.380376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:24.925 [2024-10-15 04:44:14.380396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.925 [2024-10-15 04:44:14.380461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:24.925 [2024-10-15 04:44:14.384833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.925 [2024-10-15 04:44:14.384903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:24.925 [2024-10-15 04:44:14.384923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:20:24.925 [2024-10-15 04:44:14.384936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.925 [2024-10-15 04:44:14.385296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.925 [2024-10-15 04:44:14.385325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:24.925 [2024-10-15 04:44:14.385342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:24.925 [2024-10-15 04:44:14.385356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.925 [2024-10-15 04:44:14.388775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.925 [2024-10-15 04:44:14.388844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:24.925 [2024-10-15 04:44:14.388873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:20:24.925 [2024-10-15 04:44:14.388887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.925 [2024-10-15 04:44:14.396110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:20:24.925 [2024-10-15 04:44:14.396188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:24.925 [2024-10-15 04:44:14.396206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.189 ms 00:20:24.925 [2024-10-15 04:44:14.396222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.436097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.436167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:25.187 [2024-10-15 04:44:14.436183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.807 ms 00:20:25.187 [2024-10-15 04:44:14.436194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.458924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.458997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:25.187 [2024-10-15 04:44:14.459026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.660 ms 00:20:25.187 [2024-10-15 04:44:14.459040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.459239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.459254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:25.187 [2024-10-15 04:44:14.459265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:25.187 [2024-10-15 04:44:14.459275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.499105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.499177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:25.187 [2024-10-15 04:44:14.499193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.856 ms 00:20:25.187 [2024-10-15 04:44:14.499204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.539435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.539506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:25.187 [2024-10-15 04:44:14.539522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.190 ms 00:20:25.187 [2024-10-15 04:44:14.539532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.578758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.578839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:25.187 [2024-10-15 04:44:14.578856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.189 ms 00:20:25.187 [2024-10-15 04:44:14.578866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 04:44:14.619523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.187 [2024-10-15 04:44:14.619611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:25.187 [2024-10-15 04:44:14.619628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.592 ms 00:20:25.187 [2024-10-15 04:44:14.619639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.187 [2024-10-15 
04:44:14.619740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:25.187 [2024-10-15 04:44:14.619770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.619997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 
04:44:14.620039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:25.187 [2024-10-15 04:44:14.620189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:20:25.188 [2024-10-15 04:44:14.620303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:25.188 [2024-10-15 04:44:14.620868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:25.188 [2024-10-15 04:44:14.620878] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88ef924c-9641-4650-a021-afc46cb8fbb7 00:20:25.188 [2024-10-15 04:44:14.620889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:25.188 [2024-10-15 04:44:14.620899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:25.188 [2024-10-15 04:44:14.620909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:25.188 [2024-10-15 04:44:14.620920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:25.188 [2024-10-15 04:44:14.620930] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:25.188 [2024-10-15 04:44:14.620940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:25.188 [2024-10-15 04:44:14.620950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:25.188 [2024-10-15 04:44:14.620959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:25.188 [2024-10-15 04:44:14.620968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:25.188 [2024-10-15 04:44:14.620978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.188 [2024-10-15 04:44:14.620988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:25.188 [2024-10-15 04:44:14.621003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:20:25.188 [2024-10-15 04:44:14.621013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.188 [2024-10-15 04:44:14.642729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.188 [2024-10-15 04:44:14.642797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:25.188 [2024-10-15 04:44:14.642843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.704 ms 00:20:25.188 [2024-10-15 04:44:14.642855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.188 [2024-10-15 04:44:14.643537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.188 [2024-10-15 04:44:14.643568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:25.188 [2024-10-15 04:44:14.643581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:20:25.188 [2024-10-15 04:44:14.643592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.703105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.703171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.448 [2024-10-15 04:44:14.703205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.703216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.703356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.703375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.448 [2024-10-15 04:44:14.703386] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.703401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.703475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.703488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.448 [2024-10-15 04:44:14.703499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.703509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.703545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.703556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.448 [2024-10-15 04:44:14.703570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.703581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.834093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.834188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.448 [2024-10-15 04:44:14.834205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.834217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.946502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.946599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.448 [2024-10-15 04:44:14.946616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.946643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.946738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.946751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.448 [2024-10-15 04:44:14.946762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.946773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.946805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.946817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.448 [2024-10-15 04:44:14.946828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.946863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.946975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.946991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.448 [2024-10-15 04:44:14.947003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.947014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.947051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.947064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:20:25.448 [2024-10-15 04:44:14.947075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.947085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.947132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.947144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.448 [2024-10-15 04:44:14.947155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.947166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.947211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.448 [2024-10-15 04:44:14.947223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.448 [2024-10-15 04:44:14.947234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.448 [2024-10-15 04:44:14.947249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.448 [2024-10-15 04:44:14.947395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.119 ms, result 0 00:20:26.827 00:20:26.827 00:20:26.827 04:44:16 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:27.086 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:27.086 04:44:16 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:27.086 04:44:16 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:27.086 04:44:16 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:27.086 04:44:16 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:27.086 04:44:16 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:27.086 04:44:16 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:27.346 04:44:16 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76247 00:20:27.346 04:44:16 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76247 ']' 00:20:27.346 Process with pid 76247 is not found 00:20:27.346 04:44:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76247 00:20:27.346 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76247) - No such process 00:20:27.346 04:44:16 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 76247 is not found' 00:20:27.346 00:20:27.346 real 1m6.044s 00:20:27.346 user 1m31.840s 00:20:27.346 sys 0m6.917s 00:20:27.346 04:44:16 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.346 04:44:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:27.346 ************************************ 00:20:27.346 END TEST ftl_trim 00:20:27.346 ************************************ 00:20:27.346 04:44:16 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:27.346 04:44:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:27.346 04:44:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:27.346 04:44:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:27.346 ************************************ 00:20:27.346 START TEST ftl_restore 00:20:27.346 
************************************ 00:20:27.346 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:27.346 * Looking for test storage... 00:20:27.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:27.346 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:27.346 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:20:27.346 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:27.605 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.605 04:44:16 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:27.605 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.605 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.605 --rc genhtml_branch_coverage=1 00:20:27.605 --rc genhtml_function_coverage=1 00:20:27.605 --rc genhtml_legend=1 00:20:27.605 --rc geninfo_all_blocks=1 00:20:27.605 --rc geninfo_unexecuted_blocks=1 00:20:27.605 00:20:27.605 ' 00:20:27.605 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.605 --rc genhtml_branch_coverage=1 00:20:27.605 --rc genhtml_function_coverage=1 00:20:27.605 --rc genhtml_legend=1 00:20:27.605 --rc geninfo_all_blocks=1 00:20:27.605 --rc geninfo_unexecuted_blocks=1 00:20:27.605 00:20:27.605 ' 00:20:27.605 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.605 --rc genhtml_branch_coverage=1 00:20:27.605 --rc genhtml_function_coverage=1 00:20:27.605 --rc genhtml_legend=1 00:20:27.605 --rc geninfo_all_blocks=1 00:20:27.605 --rc geninfo_unexecuted_blocks=1 00:20:27.605 00:20:27.605 ' 00:20:27.605 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:27.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.606 --rc genhtml_branch_coverage=1 00:20:27.606 --rc genhtml_function_coverage=1 00:20:27.606 --rc genhtml_legend=1 00:20:27.606 --rc geninfo_all_blocks=1 00:20:27.606 --rc geninfo_unexecuted_blocks=1 00:20:27.606 00:20:27.606 ' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
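The xtrace records above walk through the dotted-version comparison in scripts/common.sh: `lt 1.15 2` expands to `cmp_versions 1.15 '<' 2`, which splits each version on the separators, pads to the longer component list, and decides as soon as one component differs. A condensed, illustrative re-implementation of just the less-than check (a sketch, not the exact SPDK helper; it keeps only the `.`/`-` separators and drops the gt/eq modes the real script also supports):

```bash
#!/usr/bin/env bash
# lt A B -> exit 0 iff version A sorts strictly before version B.
lt() {
	local -a a b
	IFS=.- read -ra a <<< "$1"   # "1.15" -> (1 15)
	IFS=.- read -ra b <<< "$2"   # "2"    -> (2)
	local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
	for (( i = 0; i < max; i++ )); do
		# missing components evaluate as 0, matching the padded loop traced above
		(( ${a[i]:-0} < ${b[i]:-0} )) && return 0
		(( ${a[i]:-0} > ${b[i]:-0} )) && return 1
	done
	return 1   # equal is not less-than
}

lt 1.15 2 && echo "lcov < 2: use legacy --rc option spellings"
```

Since 1.15 does sort before 2, the trace returns 0 and the script exports the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spellings seen in the LCOV_OPTS assignments above.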
00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.diqnPA6suA 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:27.606 
04:44:16 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76522 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76522 00:20:27.606 04:44:16 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.606 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76522 ']' 00:20:27.606 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.606 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.606 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.606 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.606 04:44:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:27.606 [2024-10-15 04:44:17.095729] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:20:27.606 [2024-10-15 04:44:17.095876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76522 ] 00:20:27.865 [2024-10-15 04:44:17.259847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.124 [2024-10-15 04:44:17.381044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.057 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.057 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:29.057 04:44:18 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:29.057 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:29.057 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:29.057 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:29.057 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:29.316 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:29.316 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:29.316 { 00:20:29.316 "name": "nvme0n1", 00:20:29.316 "aliases": [ 00:20:29.316 "bd035d96-0183-46a8-9217-0fd2093a0cfd" 00:20:29.316 ], 00:20:29.316 "product_name": "NVMe disk", 00:20:29.316 "block_size": 4096, 00:20:29.316 "num_blocks": 1310720, 00:20:29.316 "uuid": 
"bd035d96-0183-46a8-9217-0fd2093a0cfd", 00:20:29.316 "numa_id": -1, 00:20:29.316 "assigned_rate_limits": { 00:20:29.316 "rw_ios_per_sec": 0, 00:20:29.316 "rw_mbytes_per_sec": 0, 00:20:29.316 "r_mbytes_per_sec": 0, 00:20:29.316 "w_mbytes_per_sec": 0 00:20:29.316 }, 00:20:29.316 "claimed": true, 00:20:29.316 "claim_type": "read_many_write_one", 00:20:29.316 "zoned": false, 00:20:29.316 "supported_io_types": { 00:20:29.316 "read": true, 00:20:29.316 "write": true, 00:20:29.316 "unmap": true, 00:20:29.316 "flush": true, 00:20:29.316 "reset": true, 00:20:29.316 "nvme_admin": true, 00:20:29.316 "nvme_io": true, 00:20:29.316 "nvme_io_md": false, 00:20:29.316 "write_zeroes": true, 00:20:29.316 "zcopy": false, 00:20:29.316 "get_zone_info": false, 00:20:29.316 "zone_management": false, 00:20:29.316 "zone_append": false, 00:20:29.316 "compare": true, 00:20:29.316 "compare_and_write": false, 00:20:29.316 "abort": true, 00:20:29.316 "seek_hole": false, 00:20:29.316 "seek_data": false, 00:20:29.316 "copy": true, 00:20:29.316 "nvme_iov_md": false 00:20:29.316 }, 00:20:29.316 "driver_specific": { 00:20:29.316 "nvme": [ 00:20:29.316 { 00:20:29.316 "pci_address": "0000:00:11.0", 00:20:29.316 "trid": { 00:20:29.316 "trtype": "PCIe", 00:20:29.316 "traddr": "0000:00:11.0" 00:20:29.316 }, 00:20:29.316 "ctrlr_data": { 00:20:29.316 "cntlid": 0, 00:20:29.316 "vendor_id": "0x1b36", 00:20:29.316 "model_number": "QEMU NVMe Ctrl", 00:20:29.316 "serial_number": "12341", 00:20:29.316 "firmware_revision": "8.0.0", 00:20:29.316 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:29.316 "oacs": { 00:20:29.316 "security": 0, 00:20:29.316 "format": 1, 00:20:29.316 "firmware": 0, 00:20:29.316 "ns_manage": 1 00:20:29.316 }, 00:20:29.316 "multi_ctrlr": false, 00:20:29.316 "ana_reporting": false 00:20:29.316 }, 00:20:29.316 "vs": { 00:20:29.316 "nvme_version": "1.4" 00:20:29.316 }, 00:20:29.316 "ns_data": { 00:20:29.316 "id": 1, 00:20:29.316 "can_share": false 00:20:29.316 } 00:20:29.316 } 00:20:29.316 ], 00:20:29.316 "mp_policy": "active_passive" 00:20:29.316 } 00:20:29.316 } 00:20:29.316 ]' 00:20:29.316 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:29.574 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:29.574 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:29.575 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:29.575 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:29.575 04:44:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:20:29.575 04:44:18 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:29.575 04:44:18 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:29.575 04:44:18 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:29.575 04:44:18 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:29.575 04:44:18 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:29.833 04:44:19 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5e683ac7-03d8-4e29-b8d5-6aa9a4d8f12e 00:20:29.833 04:44:19 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:29.833 04:44:19 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e683ac7-03d8-4e29-b8d5-6aa9a4d8f12e 00:20:29.833 04:44:19 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:30.091 04:44:19 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=297b7b3e-1d3c-4a72-89ce-7223e976ccdc 00:20:30.091 04:44:19 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 297b7b3e-1d3c-4a72-89ce-7223e976ccdc 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:30.349 04:44:19 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:30.349 04:44:19 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:30.349 04:44:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:30.349 04:44:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:30.349 04:44:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:30.349 04:44:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:30.608 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:30.608 { 00:20:30.608 "name": "e75787c1-0b7b-47f4-a929-22c7bb8e0e2a", 00:20:30.608 "aliases": [ 00:20:30.608 "lvs/nvme0n1p0" 00:20:30.608 ], 00:20:30.608 "product_name": "Logical Volume", 00:20:30.608 "block_size": 4096, 00:20:30.608 "num_blocks": 26476544, 00:20:30.608 "uuid": "e75787c1-0b7b-47f4-a929-22c7bb8e0e2a", 00:20:30.608 "assigned_rate_limits": { 00:20:30.608 "rw_ios_per_sec": 0, 00:20:30.608 "rw_mbytes_per_sec": 0, 00:20:30.608 "r_mbytes_per_sec": 0, 00:20:30.608 "w_mbytes_per_sec": 0 00:20:30.608 }, 00:20:30.608 "claimed": false, 00:20:30.608 "zoned": false, 00:20:30.608 "supported_io_types": { 00:20:30.608 "read": true, 00:20:30.608 "write": true, 00:20:30.608 "unmap": true, 00:20:30.608 "flush": false, 00:20:30.608 "reset": true, 00:20:30.608 "nvme_admin": false, 00:20:30.608 "nvme_io": false, 00:20:30.608 "nvme_io_md": false, 00:20:30.608 "write_zeroes": true, 00:20:30.608 "zcopy": false, 00:20:30.608 "get_zone_info": false, 00:20:30.608 "zone_management": false, 00:20:30.608 "zone_append": false, 00:20:30.608 "compare": false, 00:20:30.608 "compare_and_write": false, 00:20:30.608 "abort": false, 00:20:30.608 "seek_hole": true, 00:20:30.608 "seek_data": true, 00:20:30.608 "copy": false, 00:20:30.608 "nvme_iov_md": false 00:20:30.608 }, 00:20:30.608 "driver_specific": { 00:20:30.608 "lvol": { 00:20:30.608 "lvol_store_uuid": "297b7b3e-1d3c-4a72-89ce-7223e976ccdc", 00:20:30.608 "base_bdev": "nvme0n1", 00:20:30.608 "thin_provision": true, 00:20:30.608 "num_allocated_clusters": 0, 00:20:30.608 "snapshot": false, 00:20:30.608 "clone": false, 00:20:30.608 "esnap_clone": false 00:20:30.608 } 00:20:30.608 } 00:20:30.608 } 00:20:30.608 ]' 00:20:30.608 04:44:20 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:30.608 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:30.608 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:30.866 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:30.866 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:30.866 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:30.866 04:44:20 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:30.867 04:44:20 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:30.867 04:44:20 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:31.127 04:44:20 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:31.127 04:44:20 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:31.127 04:44:20 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:31.127 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:31.127 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:31.127 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:31.127 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:31.127 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:31.389 { 00:20:31.389 "name": "e75787c1-0b7b-47f4-a929-22c7bb8e0e2a", 00:20:31.389 "aliases": [ 00:20:31.389 "lvs/nvme0n1p0" 00:20:31.389 ], 00:20:31.389 "product_name": "Logical Volume", 00:20:31.389 "block_size": 4096, 00:20:31.389 "num_blocks": 26476544, 00:20:31.389 "uuid": "e75787c1-0b7b-47f4-a929-22c7bb8e0e2a", 00:20:31.389 "assigned_rate_limits": { 00:20:31.389 "rw_ios_per_sec": 0, 00:20:31.389 "rw_mbytes_per_sec": 0, 00:20:31.389 "r_mbytes_per_sec": 0, 00:20:31.389 "w_mbytes_per_sec": 0 00:20:31.389 }, 00:20:31.389 "claimed": false, 00:20:31.389 "zoned": false, 00:20:31.389 "supported_io_types": { 00:20:31.389 "read": true, 00:20:31.389 "write": true, 00:20:31.389 "unmap": true, 00:20:31.389 "flush": false, 00:20:31.389 "reset": true, 00:20:31.389 "nvme_admin": false, 00:20:31.389 "nvme_io": false, 00:20:31.389 "nvme_io_md": false, 00:20:31.389 "write_zeroes": true, 00:20:31.389 "zcopy": false, 00:20:31.389 "get_zone_info": false, 00:20:31.389 "zone_management": false, 00:20:31.389 "zone_append": false, 00:20:31.389 "compare": false, 00:20:31.389 "compare_and_write": false, 00:20:31.389 "abort": false, 00:20:31.389 "seek_hole": true, 00:20:31.389 "seek_data": true, 00:20:31.389 "copy": false, 00:20:31.389 "nvme_iov_md": false 00:20:31.389 }, 00:20:31.389 "driver_specific": { 00:20:31.389 "lvol": { 00:20:31.389 "lvol_store_uuid": "297b7b3e-1d3c-4a72-89ce-7223e976ccdc", 00:20:31.389 "base_bdev": "nvme0n1", 00:20:31.389 "thin_provision": true, 00:20:31.389 "num_allocated_clusters": 0, 00:20:31.389 "snapshot": false, 00:20:31.389 "clone": false, 00:20:31.389 "esnap_clone": false 00:20:31.389 } 00:20:31.389 } 00:20:31.389 } 00:20:31.389 ]' 00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
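Each jq probe in this stretch is autotest_common.sh's get_bdev_size at work: the bdev_get_bdevs output is captured once into bdev_info, block_size and num_blocks are pulled out with jq, and the MiB size falls out as block_size * num_blocks / 1024 / 1024 (4096 x 1310720 gave the 5120 MiB namespace earlier; 4096 x 26476544 gives the 103424 MiB thin lvol measured above). A condensed sketch of that flow, with an illustrative wrapper name and the rpc.py path used throughout this run:

    # Fetch the bdev's JSON once, pull block_size and num_blocks with jq,
    # and convert the product to MiB, as the traced helper does.
    get_bdev_size_mb() {
        local bdev=$1 info bs nb
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
        bs=$(jq '.[] .block_size' <<< "$info")
        nb=$(jq '.[] .num_blocks' <<< "$info")
        echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 26476544 / 2^20 = 103424
    }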
00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:31.389 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:31.389 04:44:20 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:31.389 04:44:20 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:31.648 04:44:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:31.648 04:44:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:31.648 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:31.649 04:44:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:31.649 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:31.649 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:31.649 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e75787c1-0b7b-47f4-a929-22c7bb8e0e2a 00:20:31.908 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:31.908 { 00:20:31.908 "name": "e75787c1-0b7b-47f4-a929-22c7bb8e0e2a", 00:20:31.908 "aliases": [ 00:20:31.908 "lvs/nvme0n1p0" 00:20:31.908 ], 00:20:31.908 "product_name": "Logical Volume", 00:20:31.908 "block_size": 4096, 00:20:31.908 "num_blocks": 26476544, 00:20:31.908 "uuid": "e75787c1-0b7b-47f4-a929-22c7bb8e0e2a", 00:20:31.908 "assigned_rate_limits": { 00:20:31.908 "rw_ios_per_sec": 0, 00:20:31.908 "rw_mbytes_per_sec": 0, 00:20:31.908 "r_mbytes_per_sec": 0, 00:20:31.908 "w_mbytes_per_sec": 0 00:20:31.908 }, 00:20:31.908 "claimed": false, 00:20:31.908 "zoned": false, 00:20:31.908 "supported_io_types": { 00:20:31.908 "read": true, 00:20:31.908 "write": true, 00:20:31.908 "unmap": true, 00:20:31.908 "flush": false, 00:20:31.908 "reset": true, 00:20:31.908 "nvme_admin": false, 00:20:31.908 "nvme_io": false, 00:20:31.908 "nvme_io_md": false, 00:20:31.908 "write_zeroes": true, 00:20:31.908 "zcopy": false, 00:20:31.908 "get_zone_info": false, 00:20:31.908 "zone_management": false, 00:20:31.908 "zone_append": false, 00:20:31.908 "compare": false, 00:20:31.908 "compare_and_write": false, 00:20:31.908 "abort": false, 00:20:31.908 "seek_hole": true, 00:20:31.908 "seek_data": true, 00:20:31.908 "copy": false, 00:20:31.908 "nvme_iov_md": false 00:20:31.908 }, 00:20:31.908 "driver_specific": { 00:20:31.908 "lvol": { 00:20:31.908 "lvol_store_uuid": "297b7b3e-1d3c-4a72-89ce-7223e976ccdc", 00:20:31.908 "base_bdev": "nvme0n1", 00:20:31.908 "thin_provision": true, 00:20:31.908 "num_allocated_clusters": 0, 00:20:31.908 "snapshot": false, 00:20:31.908 "clone": false, 00:20:31.908 "esnap_clone": false 00:20:31.908 } 00:20:31.908 } 00:20:31.908 } 00:20:31.908 ]' 00:20:31.908 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:31.908 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:31.908 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:31.908 04:44:21 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:20:31.908 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:31.908 04:44:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e75787c1-0b7b-47f4-a929-22c7bb8e0e2a --l2p_dram_limit 10' 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:31.908 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:31.908 04:44:21 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e75787c1-0b7b-47f4-a929-22c7bb8e0e2a --l2p_dram_limit 10 -c nvc0n1p0 00:20:32.169 [2024-10-15 04:44:21.544336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.544397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.169 [2024-10-15 04:44:21.544418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:32.169 [2024-10-15 04:44:21.544429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.544515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.544533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.169 [2024-10-15 04:44:21.544547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:32.169 [2024-10-15 04:44:21.544558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.544583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.169 [2024-10-15 04:44:21.545658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.169 [2024-10-15 04:44:21.545696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.545708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.169 [2024-10-15 04:44:21.545722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:20:32.169 [2024-10-15 04:44:21.545733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.545833] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ecc6ff85-7abb-45d9-8cbd-e91e1968175e 00:20:32.169 [2024-10-15 04:44:21.547279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.547314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:32.169 [2024-10-15 04:44:21.547327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:32.169 [2024-10-15 04:44:21.547342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.554824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 
04:44:21.554866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.169 [2024-10-15 04:44:21.554880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.452 ms 00:20:32.169 [2024-10-15 04:44:21.554898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.555016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.555035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.169 [2024-10-15 04:44:21.555048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:32.169 [2024-10-15 04:44:21.555066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.555151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.555172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.169 [2024-10-15 04:44:21.555184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:32.169 [2024-10-15 04:44:21.555213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.555243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.169 [2024-10-15 04:44:21.560595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.560635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.169 [2024-10-15 04:44:21.560651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.367 ms 00:20:32.169 [2024-10-15 04:44:21.560666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.560706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.560717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.169 [2024-10-15 04:44:21.560730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:32.169 [2024-10-15 04:44:21.560740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.560784] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:32.169 [2024-10-15 04:44:21.560924] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:32.169 [2024-10-15 04:44:21.560959] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.169 [2024-10-15 04:44:21.560973] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:32.169 [2024-10-15 04:44:21.560989] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:32.169 [2024-10-15 04:44:21.561026] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.169 [2024-10-15 04:44:21.561038] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:32.169 [2024-10-15 04:44:21.561048] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:32.169 [2024-10-15 04:44:21.561064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.561075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.169 [2024-10-15 04:44:21.561088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:32.169 [2024-10-15 04:44:21.561108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.561206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.169 [2024-10-15 04:44:21.561218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.169 [2024-10-15 04:44:21.561242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:32.169 [2024-10-15 04:44:21.561252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.169 [2024-10-15 04:44:21.561363] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.169 [2024-10-15 04:44:21.561379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.169 [2024-10-15 04:44:21.561394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:32.169 [2024-10-15 04:44:21.561429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.169 [2024-10-15 04:44:21.561464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.169 [2024-10-15 04:44:21.561488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.169 [2024-10-15 04:44:21.561499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:32.169 [2024-10-15 04:44:21.561511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.169 [2024-10-15 04:44:21.561522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.169 [2024-10-15 04:44:21.561534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:32.169 [2024-10-15 04:44:21.561544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.169 [2024-10-15 04:44:21.561573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.169 [2024-10-15 04:44:21.561611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.169 
[2024-10-15 04:44:21.561643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.169 [2024-10-15 04:44:21.561679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.169 [2024-10-15 04:44:21.561711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.169 [2024-10-15 04:44:21.561733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.169 [2024-10-15 04:44:21.561748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:32.169 [2024-10-15 04:44:21.561758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.169 [2024-10-15 04:44:21.561771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.169 [2024-10-15 04:44:21.561781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:32.169 [2024-10-15 04:44:21.561793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.169 [2024-10-15 04:44:21.561803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:32.169 [2024-10-15 04:44:21.561816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:32.170 [2024-10-15 04:44:21.561826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.170 [2024-10-15 04:44:21.561850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:32.170 [2024-10-15 04:44:21.561860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:32.170 [2024-10-15 04:44:21.561872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.170 [2024-10-15 04:44:21.561882] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.170 [2024-10-15 04:44:21.561896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.170 [2024-10-15 04:44:21.561907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.170 [2024-10-15 04:44:21.561920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.170 [2024-10-15 04:44:21.561931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.170 [2024-10-15 04:44:21.561950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.170 [2024-10-15 04:44:21.561961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.170 [2024-10-15 04:44:21.561974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.170 [2024-10-15 04:44:21.561984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.170 [2024-10-15 04:44:21.561997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.170 [2024-10-15 04:44:21.562013] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.170 [2024-10-15 
04:44:21.562030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:32.170 [2024-10-15 04:44:21.562057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:32.170 [2024-10-15 04:44:21.562068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:32.170 [2024-10-15 04:44:21.562082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:32.170 [2024-10-15 04:44:21.562093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:32.170 [2024-10-15 04:44:21.562119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:32.170 [2024-10-15 04:44:21.562130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:32.170 [2024-10-15 04:44:21.562143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:32.170 [2024-10-15 04:44:21.562154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:32.170 [2024-10-15 04:44:21.562170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:32.170 [2024-10-15 04:44:21.562228] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.170 [2024-10-15 04:44:21.562242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.170 [2024-10-15 04:44:21.562272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.170 [2024-10-15 04:44:21.562283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.170 [2024-10-15 04:44:21.562296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:32.170 [2024-10-15 04:44:21.562307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.170 [2024-10-15 04:44:21.562321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.170 [2024-10-15 04:44:21.562332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:20:32.170 [2024-10-15 04:44:21.562344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.170 [2024-10-15 04:44:21.562389] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:32.170 [2024-10-15 04:44:21.562413] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:36.362 [2024-10-15 04:44:25.466234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.466309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:36.362 [2024-10-15 04:44:25.466327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3910.180 ms 00:20:36.362 [2024-10-15 04:44:25.466340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.507540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.507600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.362 [2024-10-15 04:44:25.507615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.970 ms 00:20:36.362 [2024-10-15 04:44:25.507629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.507780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.507796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.362 [2024-10-15 04:44:25.507807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:36.362 [2024-10-15 04:44:25.507834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.557812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.557884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.362 [2024-10-15 04:44:25.557900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.997 ms 00:20:36.362 [2024-10-15 04:44:25.557913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.557962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.557978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.362 [2024-10-15 04:44:25.557989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:36.362 [2024-10-15 04:44:25.558004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.558489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.558521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.362 [2024-10-15 04:44:25.558534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:20:36.362 [2024-10-15 04:44:25.558547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 
[2024-10-15 04:44:25.558651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.558664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.362 [2024-10-15 04:44:25.558675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:36.362 [2024-10-15 04:44:25.558690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.580396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.580455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.362 [2024-10-15 04:44:25.580471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.718 ms 00:20:36.362 [2024-10-15 04:44:25.580488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.603431] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:36.362 [2024-10-15 04:44:25.606630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.606668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.362 [2024-10-15 04:44:25.606687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.072 ms 00:20:36.362 [2024-10-15 04:44:25.606697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.698766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.698845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:36.362 [2024-10-15 04:44:25.698865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.167 ms 00:20:36.362 [2024-10-15 04:44:25.698876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.699071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.699084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.362 [2024-10-15 04:44:25.699101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:36.362 [2024-10-15 04:44:25.699114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.735669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.735727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:36.362 [2024-10-15 04:44:25.735746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.552 ms 00:20:36.362 [2024-10-15 04:44:25.735757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.771713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.771765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:36.362 [2024-10-15 04:44:25.771784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.961 ms 00:20:36.362 [2024-10-15 04:44:25.771794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.362 [2024-10-15 04:44:25.772544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.362 [2024-10-15 04:44:25.772573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.362 
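The ftl_l2p_cache notice above ("l2p maximum resident size is: 9 (of 10) MiB") is the --l2p_dram_limit 10 argument from the bdev_ftl_create call taking effect: the layout dump put the full mapping table at 20971520 L2P entries of 4 bytes each, so only a small window of it can stay resident in DRAM. A quick check of those numbers:

    # Full L2P table size implied by the layout dump above:
    # 20971520 entries * 4 bytes per address, in MiB.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80, matching "Region l2p ... 80.00 MiB"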
[2024-10-15 04:44:25.772590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:20:36.362 [2024-10-15 04:44:25.772602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.872970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.621 [2024-10-15 04:44:25.873035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:36.621 [2024-10-15 04:44:25.873064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.457 ms 00:20:36.621 [2024-10-15 04:44:25.873076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.912576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.621 [2024-10-15 04:44:25.912654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:36.621 [2024-10-15 04:44:25.912678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.468 ms 00:20:36.621 [2024-10-15 04:44:25.912689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.952936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.621 [2024-10-15 04:44:25.953020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:36.621 [2024-10-15 04:44:25.953042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.230 ms 00:20:36.621 [2024-10-15 04:44:25.953053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.992235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.621 [2024-10-15 04:44:25.992295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.621 [2024-10-15 04:44:25.992313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.167 ms 00:20:36.621 [2024-10-15 04:44:25.992324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.992386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.621 [2024-10-15 04:44:25.992399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.621 [2024-10-15 04:44:25.992416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:36.621 [2024-10-15 04:44:25.992426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.992543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.621 [2024-10-15 04:44:25.992555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.621 [2024-10-15 04:44:25.992568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:36.621 [2024-10-15 04:44:25.992578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.621 [2024-10-15 04:44:25.993687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4456.124 ms, result 0 00:20:36.621 { 00:20:36.621 "name": "ftl0", 00:20:36.621 "uuid": "ecc6ff85-7abb-45d9-8cbd-e91e1968175e" 00:20:36.621 } 00:20:36.621 04:44:26 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:36.621 04:44:26 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:36.884 04:44:26 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:36.884 04:44:26 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:37.144 [2024-10-15 04:44:26.428281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.428347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.144 [2024-10-15 04:44:26.428364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:37.144 [2024-10-15 04:44:26.428389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.428417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.144 [2024-10-15 04:44:26.432692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.432729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.144 [2024-10-15 04:44:26.432746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:20:37.144 [2024-10-15 04:44:26.432756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.433034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.433059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.144 [2024-10-15 04:44:26.433073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:37.144 [2024-10-15 04:44:26.433090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.435597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.435619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.144 [2024-10-15 04:44:26.435633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:20:37.144 [2024-10-15 04:44:26.435644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.440662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.440697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.144 [2024-10-15 04:44:26.440711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.001 ms 00:20:37.144 [2024-10-15 04:44:26.440721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.477387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.477432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.144 [2024-10-15 04:44:26.477450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.648 ms 00:20:37.144 [2024-10-15 04:44:26.477460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.499741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.499794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.144 [2024-10-15 04:44:26.499811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.266 ms 00:20:37.144 [2024-10-15 04:44:26.499830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.499982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.499996] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.144 [2024-10-15 04:44:26.500010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:37.144 [2024-10-15 04:44:26.500021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.536720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.536770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.144 [2024-10-15 04:44:26.536788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.735 ms 00:20:37.144 [2024-10-15 04:44:26.536798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.573401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.573460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.144 [2024-10-15 04:44:26.573478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.600 ms 00:20:37.144 [2024-10-15 04:44:26.573488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.609505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.609566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.144 [2024-10-15 04:44:26.609584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.018 ms 00:20:37.144 [2024-10-15 04:44:26.609594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.645523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.144 [2024-10-15 04:44:26.645581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.144 [2024-10-15 04:44:26.645599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.870 ms 00:20:37.144 [2024-10-15 04:44:26.645609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.144 [2024-10-15 04:44:26.645659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.144 [2024-10-15 04:44:26.645678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.144 [2024-10-15 04:44:26.645693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.144 [2024-10-15 04:44:26.645705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.144 [2024-10-15 04:44:26.645718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645795] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.645999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 
[2024-10-15 04:44:26.646105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:37.145 [2024-10-15 04:44:26.646421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.145 [2024-10-15 04:44:26.646764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.146 [2024-10-15 04:44:26.646923] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.146 [2024-10-15 04:44:26.646936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ecc6ff85-7abb-45d9-8cbd-e91e1968175e 00:20:37.146 [2024-10-15 04:44:26.646949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.146 [2024-10-15 04:44:26.646967] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.146 [2024-10-15 04:44:26.646977] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.146 [2024-10-15 04:44:26.646990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.146 [2024-10-15 04:44:26.647000] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.146 [2024-10-15 04:44:26.647016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.146 [2024-10-15 04:44:26.647027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.146 [2024-10-15 04:44:26.647038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.146 [2024-10-15 04:44:26.647047] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:37.146 [2024-10-15 04:44:26.647059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.146 [2024-10-15 04:44:26.647069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.146 [2024-10-15 04:44:26.647083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:20:37.146 [2024-10-15 04:44:26.647092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.667093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.404 [2024-10-15 04:44:26.667139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.404 [2024-10-15 04:44:26.667156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.955 ms 00:20:37.404 [2024-10-15 04:44:26.667166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.667706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.404 [2024-10-15 04:44:26.667726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.404 [2024-10-15 04:44:26.667740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:20:37.404 [2024-10-15 04:44:26.667750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.734745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.404 [2024-10-15 04:44:26.734803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.404 [2024-10-15 04:44:26.734827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.404 [2024-10-15 04:44:26.734839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.734917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.404 [2024-10-15 04:44:26.734929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.404 [2024-10-15 04:44:26.734942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.404 [2024-10-15 04:44:26.734952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.735060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.404 [2024-10-15 04:44:26.735074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.404 [2024-10-15 04:44:26.735087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.404 [2024-10-15 04:44:26.735097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.735123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.404 [2024-10-15 04:44:26.735133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.404 [2024-10-15 04:44:26.735146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.404 [2024-10-15 04:44:26.735155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.404 [2024-10-15 04:44:26.860762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.404 [2024-10-15 04:44:26.860843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.404 [2024-10-15 04:44:26.860862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:37.404 [2024-10-15 04:44:26.860872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.663 [2024-10-15 04:44:26.962622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.663 [2024-10-15 04:44:26.962690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.663 [2024-10-15 04:44:26.962708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.663 [2024-10-15 04:44:26.962718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.663 [2024-10-15 04:44:26.962849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.663 [2024-10-15 04:44:26.962866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.663 [2024-10-15 04:44:26.962879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.663 [2024-10-15 04:44:26.962889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.663 [2024-10-15 04:44:26.962955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.663 [2024-10-15 04:44:26.962967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.663 [2024-10-15 04:44:26.962980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.663 [2024-10-15 04:44:26.962990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.663 [2024-10-15 04:44:26.963114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.663 [2024-10-15 04:44:26.963127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.663 [2024-10-15 04:44:26.963143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.663 [2024-10-15 04:44:26.963153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.663 [2024-10-15 04:44:26.963193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.663 [2024-10-15 04:44:26.963205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.664 [2024-10-15 04:44:26.963218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.664 [2024-10-15 04:44:26.963228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.664 [2024-10-15 04:44:26.963269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.664 [2024-10-15 04:44:26.963281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.664 [2024-10-15 04:44:26.963296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.664 [2024-10-15 04:44:26.963306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.664 [2024-10-15 04:44:26.963353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.664 [2024-10-15 04:44:26.963364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.664 [2024-10-15 04:44:26.963376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.664 [2024-10-15 04:44:26.963386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.664 [2024-10-15 04:44:26.963516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.072 ms, result 0 00:20:37.664 true 00:20:37.664 04:44:26 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76522 
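The `killprocess 76522` step above is expanded by the xtrace lines that follow. A minimal Bash sketch of that shutdown pattern, reconstructed from the traced commands (an assumed shape, not the verbatim common/autotest_common.sh helper):

killprocess() {
    local pid=$1 process_name

    [ -z "$pid" ] && return 1        # '[' -z 76522 ']': require a pid argument
    kill -0 "$pid" || return 1       # kill -0 76522: probe that the pid is still alive

    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 in this run
    else
        process_name=$(ps -o comm= -p "$pid")             # portable fallback (assumption)
    fi
    [ "$process_name" = sudo ] && return 1   # never signal the sudo wrapper itself

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                      # reap the child so its exit status is collected
}

The `wait` only succeeds because the harness launched the app from the same shell. The trace then moves on to the restore step, where `dd bs=4K count=256K` writes a 1 GiB test file (262144 x 4096 = 1073741824 bytes, reported at 249 MB/s) that `spdk_dd` replays into the ftl0 bdev.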
00:20:37.664 04:44:26 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76522 ']' 00:20:37.664 04:44:26 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76522 00:20:37.664 04:44:26 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76522 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:37.664 killing process with pid 76522 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76522' 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76522 00:20:37.664 04:44:27 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76522 00:20:42.936 04:44:32 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:47.124 262144+0 records in 00:20:47.124 262144+0 records out 00:20:47.124 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.31698 s, 249 MB/s 00:20:47.124 04:44:36 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:49.028 04:44:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:49.028 [2024-10-15 04:44:38.202235] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:20:49.028 [2024-10-15 04:44:38.202552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76782 ] 00:20:49.028 [2024-10-15 04:44:38.377710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.029 [2024-10-15 04:44:38.496293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.598 [2024-10-15 04:44:38.884528] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.598 [2024-10-15 04:44:38.884604] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.598 [2024-10-15 04:44:39.046643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.046694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:49.598 [2024-10-15 04:44:39.046710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:49.598 [2024-10-15 04:44:39.046727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.046779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.046791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.598 [2024-10-15 04:44:39.046802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:49.598 [2024-10-15 04:44:39.046825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.046848] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:20:49.598 [2024-10-15 04:44:39.047921] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:49.598 [2024-10-15 04:44:39.047948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.047958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.598 [2024-10-15 04:44:39.047970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:20:49.598 [2024-10-15 04:44:39.047980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.049545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:49.598 [2024-10-15 04:44:39.068942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.068981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:49.598 [2024-10-15 04:44:39.068995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.428 ms 00:20:49.598 [2024-10-15 04:44:39.069006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.069089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.069110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:49.598 [2024-10-15 04:44:39.069122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:49.598 [2024-10-15 04:44:39.069132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.076012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.076043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.598 [2024-10-15 04:44:39.076056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.815 ms 00:20:49.598 [2024-10-15 04:44:39.076066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.076150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.076162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.598 [2024-10-15 04:44:39.076172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:49.598 [2024-10-15 04:44:39.076182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.076225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.076237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:49.598 [2024-10-15 04:44:39.076247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:49.598 [2024-10-15 04:44:39.076257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.076282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.598 [2024-10-15 04:44:39.081096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.081126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.598 [2024-10-15 04:44:39.081137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.828 ms 00:20:49.598 [2024-10-15 04:44:39.081148] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.081181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.081191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:49.598 [2024-10-15 04:44:39.081202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:49.598 [2024-10-15 04:44:39.081211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.081288] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:49.598 [2024-10-15 04:44:39.081325] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:49.598 [2024-10-15 04:44:39.081364] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:49.598 [2024-10-15 04:44:39.081385] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:49.598 [2024-10-15 04:44:39.081474] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:49.598 [2024-10-15 04:44:39.081487] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:49.598 [2024-10-15 04:44:39.081499] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:49.598 [2024-10-15 04:44:39.081512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:49.598 [2024-10-15 04:44:39.081524] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:49.598 [2024-10-15 04:44:39.081535] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:49.598 [2024-10-15 04:44:39.081545] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:49.598 [2024-10-15 04:44:39.081555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:49.598 [2024-10-15 04:44:39.081564] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:49.598 [2024-10-15 04:44:39.081575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.081588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:49.598 [2024-10-15 04:44:39.081598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:49.598 [2024-10-15 04:44:39.081608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.081684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.598 [2024-10-15 04:44:39.081695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:49.598 [2024-10-15 04:44:39.081705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:49.598 [2024-10-15 04:44:39.081715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.598 [2024-10-15 04:44:39.081807] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:49.598 [2024-10-15 04:44:39.081834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:49.598 [2024-10-15 04:44:39.081849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:49.598 [2024-10-15 04:44:39.081859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.598 [2024-10-15 04:44:39.081869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:49.598 [2024-10-15 04:44:39.081879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:49.598 [2024-10-15 04:44:39.081889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:49.598 [2024-10-15 04:44:39.081899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:49.598 [2024-10-15 04:44:39.081908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:49.598 [2024-10-15 04:44:39.081917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.598 [2024-10-15 04:44:39.081928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:49.598 [2024-10-15 04:44:39.081938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:49.598 [2024-10-15 04:44:39.081947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.598 [2024-10-15 04:44:39.081956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:49.598 [2024-10-15 04:44:39.081965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:49.598 [2024-10-15 04:44:39.081983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.598 [2024-10-15 04:44:39.081993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:49.598 [2024-10-15 04:44:39.082002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:49.598 [2024-10-15 04:44:39.082011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.598 [2024-10-15 04:44:39.082020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:49.598 [2024-10-15 04:44:39.082029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:49.598 [2024-10-15 04:44:39.082038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.598 [2024-10-15 04:44:39.082048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:49.598 [2024-10-15 04:44:39.082057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:49.598 [2024-10-15 04:44:39.082066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.598 [2024-10-15 04:44:39.082074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:49.598 [2024-10-15 04:44:39.082084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:49.598 [2024-10-15 04:44:39.082093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.598 [2024-10-15 04:44:39.082102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:49.598 [2024-10-15 04:44:39.082111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:49.599 [2024-10-15 04:44:39.082120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.599 [2024-10-15 04:44:39.082129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:49.599 [2024-10-15 04:44:39.082138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:49.599 [2024-10-15 04:44:39.082147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.599 [2024-10-15 04:44:39.082156] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:49.599 [2024-10-15 04:44:39.082165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:49.599 [2024-10-15 04:44:39.082174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.599 [2024-10-15 04:44:39.082183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:49.599 [2024-10-15 04:44:39.082192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:49.599 [2024-10-15 04:44:39.082201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.599 [2024-10-15 04:44:39.082210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:49.599 [2024-10-15 04:44:39.082219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:49.599 [2024-10-15 04:44:39.082229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.599 [2024-10-15 04:44:39.082238] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:49.599 [2024-10-15 04:44:39.082248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:49.599 [2024-10-15 04:44:39.082257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.599 [2024-10-15 04:44:39.082267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.599 [2024-10-15 04:44:39.082277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:49.599 [2024-10-15 04:44:39.082286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:49.599 [2024-10-15 04:44:39.082295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:49.599 [2024-10-15 04:44:39.082304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:49.599 [2024-10-15 04:44:39.082312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:49.599 [2024-10-15 04:44:39.082322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:49.599 [2024-10-15 04:44:39.082332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:49.599 [2024-10-15 04:44:39.082344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:49.599 [2024-10-15 04:44:39.082365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:49.599 [2024-10-15 04:44:39.082376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:49.599 [2024-10-15 04:44:39.082386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:49.599 [2024-10-15 04:44:39.082397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:49.599 [2024-10-15 04:44:39.082407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:49.599 [2024-10-15 04:44:39.082417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:49.599 [2024-10-15 04:44:39.082427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:49.599 [2024-10-15 04:44:39.082437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:49.599 [2024-10-15 04:44:39.082447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:49.599 [2024-10-15 04:44:39.082497] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:49.599 [2024-10-15 04:44:39.082508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:49.599 [2024-10-15 04:44:39.082533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:49.599 [2024-10-15 04:44:39.082543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:49.599 [2024-10-15 04:44:39.082554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:49.599 [2024-10-15 04:44:39.082564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.599 [2024-10-15 04:44:39.082574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:49.599 [2024-10-15 04:44:39.082584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:20:49.599 [2024-10-15 04:44:39.082594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-10-15 04:44:39.122123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-10-15 04:44:39.122178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.858 [2024-10-15 04:44:39.122194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.545 ms 00:20:49.858 [2024-10-15 04:44:39.122205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-10-15 04:44:39.122305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-10-15 04:44:39.122316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.858 [2024-10-15 04:44:39.122327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.056 ms 00:20:49.858 [2024-10-15 04:44:39.122337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-10-15 04:44:39.179693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-10-15 04:44:39.179748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.858 [2024-10-15 04:44:39.179763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.371 ms 00:20:49.858 [2024-10-15 04:44:39.179774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-10-15 04:44:39.179840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-10-15 04:44:39.179852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.858 [2024-10-15 04:44:39.179863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:49.858 [2024-10-15 04:44:39.179877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.858 [2024-10-15 04:44:39.180369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.858 [2024-10-15 04:44:39.180388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.858 [2024-10-15 04:44:39.180399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:20:49.858 [2024-10-15 04:44:39.180409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.180547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.180562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.859 [2024-10-15 04:44:39.180573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:49.859 [2024-10-15 04:44:39.180588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.199781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.199832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.859 [2024-10-15 04:44:39.199851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.199 ms 00:20:49.859 [2024-10-15 04:44:39.199862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.219124] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:49.859 [2024-10-15 04:44:39.219184] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:49.859 [2024-10-15 04:44:39.219202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.219213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:49.859 [2024-10-15 04:44:39.219227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.238 ms 00:20:49.859 [2024-10-15 04:44:39.219236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.250288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.250363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:49.859 [2024-10-15 04:44:39.250388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.032 ms 00:20:49.859 [2024-10-15 04:44:39.250398] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.269389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.269460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:49.859 [2024-10-15 04:44:39.269475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.970 ms 00:20:49.859 [2024-10-15 04:44:39.269486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.287229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.287268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:49.859 [2024-10-15 04:44:39.287282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.726 ms 00:20:49.859 [2024-10-15 04:44:39.287292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.859 [2024-10-15 04:44:39.288100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.859 [2024-10-15 04:44:39.288125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.859 [2024-10-15 04:44:39.288138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:20:49.859 [2024-10-15 04:44:39.288149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.375512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.375595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.118 [2024-10-15 04:44:39.375614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.475 ms 00:20:50.118 [2024-10-15 04:44:39.375625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.389468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.118 [2024-10-15 04:44:39.392751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.392796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.118 [2024-10-15 04:44:39.392812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.069 ms 00:20:50.118 [2024-10-15 04:44:39.392831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.392948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.392964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.118 [2024-10-15 04:44:39.392975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:50.118 [2024-10-15 04:44:39.392986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.393107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.393130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.118 [2024-10-15 04:44:39.393141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:50.118 [2024-10-15 04:44:39.393152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.393180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.393191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:20:50.118 [2024-10-15 04:44:39.393201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.118 [2024-10-15 04:44:39.393211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.393254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.118 [2024-10-15 04:44:39.393266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.393280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.118 [2024-10-15 04:44:39.393290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:50.118 [2024-10-15 04:44:39.393300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.118 [2024-10-15 04:44:39.431978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.118 [2024-10-15 04:44:39.432044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.119 [2024-10-15 04:44:39.432060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.719 ms 00:20:50.119 [2024-10-15 04:44:39.432071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.119 [2024-10-15 04:44:39.432171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.119 [2024-10-15 04:44:39.432184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.119 [2024-10-15 04:44:39.432194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:50.119 [2024-10-15 04:44:39.432204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.119 [2024-10-15 04:44:39.433412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.954 ms, result 0 00:20:51.055  [2024-10-15T04:44:41.496Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-15T04:44:42.872Z] Copying: 55/1024 [MB] (28 MBps) [2024-10-15T04:44:43.497Z] Copying: 83/1024 [MB] (28 MBps) [2024-10-15T04:44:44.875Z] Copying: 110/1024 [MB] (27 MBps) [2024-10-15T04:44:45.442Z] Copying: 137/1024 [MB] (26 MBps) [2024-10-15T04:44:46.819Z] Copying: 165/1024 [MB] (27 MBps) [2024-10-15T04:44:47.780Z] Copying: 193/1024 [MB] (27 MBps) [2024-10-15T04:44:48.716Z] Copying: 221/1024 [MB] (27 MBps) [2024-10-15T04:44:49.654Z] Copying: 249/1024 [MB] (27 MBps) [2024-10-15T04:44:50.589Z] Copying: 277/1024 [MB] (28 MBps) [2024-10-15T04:44:51.526Z] Copying: 306/1024 [MB] (28 MBps) [2024-10-15T04:44:52.516Z] Copying: 334/1024 [MB] (28 MBps) [2024-10-15T04:44:53.451Z] Copying: 363/1024 [MB] (28 MBps) [2024-10-15T04:44:54.827Z] Copying: 391/1024 [MB] (27 MBps) [2024-10-15T04:44:55.764Z] Copying: 418/1024 [MB] (27 MBps) [2024-10-15T04:44:56.700Z] Copying: 445/1024 [MB] (26 MBps) [2024-10-15T04:44:57.637Z] Copying: 471/1024 [MB] (26 MBps) [2024-10-15T04:44:58.574Z] Copying: 499/1024 [MB] (28 MBps) [2024-10-15T04:44:59.517Z] Copying: 527/1024 [MB] (27 MBps) [2024-10-15T04:45:00.451Z] Copying: 554/1024 [MB] (27 MBps) [2024-10-15T04:45:01.827Z] Copying: 588/1024 [MB] (33 MBps) [2024-10-15T04:45:02.762Z] Copying: 620/1024 [MB] (32 MBps) [2024-10-15T04:45:03.698Z] Copying: 653/1024 [MB] (32 MBps) [2024-10-15T04:45:04.634Z] Copying: 686/1024 [MB] (33 MBps) [2024-10-15T04:45:05.570Z] Copying: 720/1024 [MB] (33 MBps) [2024-10-15T04:45:06.505Z] Copying: 752/1024 [MB] (32 MBps) [2024-10-15T04:45:07.540Z] Copying: 784/1024 [MB] (31 
MBps) [2024-10-15T04:45:08.477Z] Copying: 815/1024 [MB] (31 MBps) [2024-10-15T04:45:09.414Z] Copying: 847/1024 [MB] (31 MBps) [2024-10-15T04:45:10.788Z] Copying: 878/1024 [MB] (30 MBps) [2024-10-15T04:45:11.723Z] Copying: 907/1024 [MB] (29 MBps) [2024-10-15T04:45:12.659Z] Copying: 935/1024 [MB] (28 MBps) [2024-10-15T04:45:13.601Z] Copying: 963/1024 [MB] (27 MBps) [2024-10-15T04:45:14.537Z] Copying: 991/1024 [MB] (28 MBps) [2024-10-15T04:45:14.537Z] Copying: 1021/1024 [MB] (29 MBps) [2024-10-15T04:45:14.537Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-10-15 04:45:14.483974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.033 [2024-10-15 04:45:14.484032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:25.033 [2024-10-15 04:45:14.484049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:25.033 [2024-10-15 04:45:14.484060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.033 [2024-10-15 04:45:14.484084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:25.033 [2024-10-15 04:45:14.488560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.033 [2024-10-15 04:45:14.488601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:25.033 [2024-10-15 04:45:14.488614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:21:25.033 [2024-10-15 04:45:14.488624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.033 [2024-10-15 04:45:14.490349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.033 [2024-10-15 04:45:14.490394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:25.033 [2024-10-15 04:45:14.490407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:21:25.033 [2024-10-15 04:45:14.490418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.033 [2024-10-15 04:45:14.507770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.033 [2024-10-15 04:45:14.507826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:25.033 [2024-10-15 04:45:14.507841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.361 ms 00:21:25.033 [2024-10-15 04:45:14.507852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.033 [2024-10-15 04:45:14.513217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.033 [2024-10-15 04:45:14.513272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:25.033 [2024-10-15 04:45:14.513285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.337 ms 00:21:25.033 [2024-10-15 04:45:14.513295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.293 [2024-10-15 04:45:14.551228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.551288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:25.294 [2024-10-15 04:45:14.551303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.925 ms 00:21:25.294 [2024-10-15 04:45:14.551314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.573335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.573403] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:25.294 [2024-10-15 04:45:14.573420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.004 ms 00:21:25.294 [2024-10-15 04:45:14.573431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.573594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.573617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:25.294 [2024-10-15 04:45:14.573638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:25.294 [2024-10-15 04:45:14.573649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.610703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.610761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:25.294 [2024-10-15 04:45:14.610777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.094 ms 00:21:25.294 [2024-10-15 04:45:14.610788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.649385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.649444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:25.294 [2024-10-15 04:45:14.649476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.602 ms 00:21:25.294 [2024-10-15 04:45:14.649486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.687072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.687128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:25.294 [2024-10-15 04:45:14.687143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.598 ms 00:21:25.294 [2024-10-15 04:45:14.687153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.725907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.294 [2024-10-15 04:45:14.725972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:25.294 [2024-10-15 04:45:14.725989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.730 ms 00:21:25.294 [2024-10-15 04:45:14.726000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.294 [2024-10-15 04:45:14.726051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:25.294 [2024-10-15 04:45:14.726069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 
04:45:14.726143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:21:25.294 [2024-10-15 04:45:14.726449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:25.294 [2024-10-15 04:45:14.726791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.726992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:25.295 [2024-10-15 04:45:14.727184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:25.295 [2024-10-15 04:45:14.727199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ecc6ff85-7abb-45d9-8cbd-e91e1968175e 00:21:25.295 [2024-10-15 04:45:14.727213] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:25.295 [2024-10-15 04:45:14.727223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:25.295 [2024-10-15 04:45:14.727232] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:25.295 [2024-10-15 04:45:14.727243] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:25.295 [2024-10-15 04:45:14.727252] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:25.295 [2024-10-15 04:45:14.727263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:25.295 [2024-10-15 04:45:14.727272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:25.295 [2024-10-15 04:45:14.727293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:25.295 [2024-10-15 04:45:14.727302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:25.295 [2024-10-15 04:45:14.727312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.295 [2024-10-15 04:45:14.727322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:25.295 [2024-10-15 04:45:14.727332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:21:25.295 [2024-10-15 04:45:14.727342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.295 [2024-10-15 04:45:14.747659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.295 [2024-10-15 04:45:14.747727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:25.295 [2024-10-15 04:45:14.747743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.306 ms 00:21:25.295 [2024-10-15 04:45:14.747753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.295 [2024-10-15 04:45:14.748349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.295 [2024-10-15 04:45:14.748366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:25.295 [2024-10-15 04:45:14.748377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:25.295 [2024-10-15 04:45:14.748387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.554 [2024-10-15 04:45:14.801244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.554 [2024-10-15 04:45:14.801330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:25.554 [2024-10-15 04:45:14.801346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.554 [2024-10-15 04:45:14.801356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.554 [2024-10-15 04:45:14.801428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.554 [2024-10-15 04:45:14.801439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:25.554 [2024-10-15 04:45:14.801449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.554 [2024-10-15 04:45:14.801458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.554 [2024-10-15 04:45:14.801560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.554 [2024-10-15 04:45:14.801655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.554 [2024-10-15 04:45:14.801665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.554 [2024-10-15 04:45:14.801675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.554 [2024-10-15 04:45:14.801693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.554 [2024-10-15 04:45:14.801703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.554 [2024-10-15 04:45:14.801713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:25.554 [2024-10-15 04:45:14.801723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:14.927118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:14.927190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.555 [2024-10-15 04:45:14.927205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:14.927216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.030364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.030431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.555 [2024-10-15 04:45:15.030447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.030458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.030555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.030566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.555 [2024-10-15 04:45:15.030577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.030587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.030635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.030650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.555 [2024-10-15 04:45:15.030660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.030669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.030770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.030793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.555 [2024-10-15 04:45:15.030804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.030830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.030868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.030885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:25.555 [2024-10-15 04:45:15.030896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.030906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.030941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.030952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.555 [2024-10-15 04:45:15.030966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.030975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.031016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.555 [2024-10-15 04:45:15.031030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.555 [2024-10-15 04:45:15.031040] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.555 [2024-10-15 04:45:15.031051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.555 [2024-10-15 04:45:15.031166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.049 ms, result 0 00:21:27.003 00:21:27.003 00:21:27.003 04:45:16 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:27.003 [2024-10-15 04:45:16.258593] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:21:27.003 [2024-10-15 04:45:16.258726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77172 ] 00:21:27.003 [2024-10-15 04:45:16.429605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.261 [2024-10-15 04:45:16.544648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.520 [2024-10-15 04:45:16.903618] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:27.520 [2024-10-15 04:45:16.903697] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:27.779 [2024-10-15 04:45:17.064199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.779 [2024-10-15 04:45:17.064265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:27.779 [2024-10-15 04:45:17.064281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:27.779 [2024-10-15 04:45:17.064298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.779 [2024-10-15 04:45:17.064350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.779 [2024-10-15 04:45:17.064363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.779 [2024-10-15 04:45:17.064373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:27.780 [2024-10-15 04:45:17.064387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.064409] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:27.780 [2024-10-15 04:45:17.065410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:27.780 [2024-10-15 04:45:17.065507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.065518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.780 [2024-10-15 04:45:17.065530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:21:27.780 [2024-10-15 04:45:17.065540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.067118] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:27.780 [2024-10-15 04:45:17.085768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.085821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:27.780 [2024-10-15 
04:45:17.085837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.680 ms 00:21:27.780 [2024-10-15 04:45:17.085848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.085913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.085930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:27.780 [2024-10-15 04:45:17.085941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:27.780 [2024-10-15 04:45:17.085951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.092799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.092837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.780 [2024-10-15 04:45:17.092850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.785 ms 00:21:27.780 [2024-10-15 04:45:17.092861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.092944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.092957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.780 [2024-10-15 04:45:17.092968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:27.780 [2024-10-15 04:45:17.092978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.093027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.093039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:27.780 [2024-10-15 04:45:17.093050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:27.780 [2024-10-15 04:45:17.093060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.093085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:27.780 [2024-10-15 04:45:17.097769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.097806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.780 [2024-10-15 04:45:17.097827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.697 ms 00:21:27.780 [2024-10-15 04:45:17.097838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.097872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.097882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:27.780 [2024-10-15 04:45:17.097894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:27.780 [2024-10-15 04:45:17.097903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.097957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:27.780 [2024-10-15 04:45:17.097980] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:27.780 [2024-10-15 04:45:17.098016] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:27.780 [2024-10-15 04:45:17.098036] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:27.780 [2024-10-15 04:45:17.098125] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:27.780 [2024-10-15 04:45:17.098138] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:27.780 [2024-10-15 04:45:17.098151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:27.780 [2024-10-15 04:45:17.098164] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098177] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098188] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:27.780 [2024-10-15 04:45:17.098198] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:27.780 [2024-10-15 04:45:17.098208] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:27.780 [2024-10-15 04:45:17.098218] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:27.780 [2024-10-15 04:45:17.098228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.098242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:27.780 [2024-10-15 04:45:17.098252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:27.780 [2024-10-15 04:45:17.098262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.098333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.780 [2024-10-15 04:45:17.098349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:27.780 [2024-10-15 04:45:17.098360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:27.780 [2024-10-15 04:45:17.098370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.780 [2024-10-15 04:45:17.098464] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:27.780 [2024-10-15 04:45:17.098479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:27.780 [2024-10-15 04:45:17.098494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:27.780 [2024-10-15 04:45:17.098523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:27.780 [2024-10-15 04:45:17.098552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.780 [2024-10-15 04:45:17.098570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:27.780 [2024-10-15 04:45:17.098580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:21:27.780 [2024-10-15 04:45:17.098589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.780 [2024-10-15 04:45:17.098598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:27.780 [2024-10-15 04:45:17.098607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:27.780 [2024-10-15 04:45:17.098625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:27.780 [2024-10-15 04:45:17.098643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:27.780 [2024-10-15 04:45:17.098672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:27.780 [2024-10-15 04:45:17.098699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:27.780 [2024-10-15 04:45:17.098726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:27.780 [2024-10-15 04:45:17.098753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:27.780 [2024-10-15 04:45:17.098779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.780 [2024-10-15 04:45:17.098797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:27.780 [2024-10-15 04:45:17.098806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:27.780 [2024-10-15 04:45:17.098825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.780 [2024-10-15 04:45:17.098835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:27.780 [2024-10-15 04:45:17.098844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:27.780 [2024-10-15 04:45:17.098853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:27.780 [2024-10-15 04:45:17.098871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:27.780 [2024-10-15 04:45:17.098881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098889] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:27.780 [2024-10-15 04:45:17.098900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:27.780 [2024-10-15 04:45:17.098910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.780 [2024-10-15 04:45:17.098929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:27.780 [2024-10-15 04:45:17.098938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:27.780 [2024-10-15 04:45:17.098948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:27.780 [2024-10-15 04:45:17.098957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:27.780 [2024-10-15 04:45:17.098967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:27.780 [2024-10-15 04:45:17.098977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:27.780 [2024-10-15 04:45:17.098988] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:27.781 [2024-10-15 04:45:17.099001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:27.781 [2024-10-15 04:45:17.099022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:27.781 [2024-10-15 04:45:17.099033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:27.781 [2024-10-15 04:45:17.099043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:27.781 [2024-10-15 04:45:17.099053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:27.781 [2024-10-15 04:45:17.099064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:27.781 [2024-10-15 04:45:17.099073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:27.781 [2024-10-15 04:45:17.099084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:27.781 [2024-10-15 04:45:17.099094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:27.781 [2024-10-15 04:45:17.099104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099134] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:27.781 [2024-10-15 04:45:17.099155] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:27.781 [2024-10-15 04:45:17.099166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:27.781 [2024-10-15 04:45:17.099192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:27.781 [2024-10-15 04:45:17.099202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:27.781 [2024-10-15 04:45:17.099217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:27.781 [2024-10-15 04:45:17.099228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.099239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:27.781 [2024-10-15 04:45:17.099249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:21:27.781 [2024-10-15 04:45:17.099259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.137631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.137699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.781 [2024-10-15 04:45:17.137715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.385 ms 00:21:27.781 [2024-10-15 04:45:17.137726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.137845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.137863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:27.781 [2024-10-15 04:45:17.137874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:27.781 [2024-10-15 04:45:17.137884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.190624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.190687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.781 [2024-10-15 04:45:17.190702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.745 ms 00:21:27.781 [2024-10-15 04:45:17.190712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.190769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.190780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.781 [2024-10-15 04:45:17.190791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:27.781 [2024-10-15 04:45:17.190801] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.191307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.191330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.781 [2024-10-15 04:45:17.191342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:21:27.781 [2024-10-15 04:45:17.191353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.191473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.191487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.781 [2024-10-15 04:45:17.191497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:27.781 [2024-10-15 04:45:17.191507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.210933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.210985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.781 [2024-10-15 04:45:17.211001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.431 ms 00:21:27.781 [2024-10-15 04:45:17.211015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.230445] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:27.781 [2024-10-15 04:45:17.230522] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:27.781 [2024-10-15 04:45:17.230539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.230550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:27.781 [2024-10-15 04:45:17.230564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:21:27.781 [2024-10-15 04:45:17.230574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.260926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.260981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:27.781 [2024-10-15 04:45:17.260997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.343 ms 00:21:27.781 [2024-10-15 04:45:17.261008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.781 [2024-10-15 04:45:17.279925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.781 [2024-10-15 04:45:17.279969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:27.781 [2024-10-15 04:45:17.279983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.887 ms 00:21:27.781 [2024-10-15 04:45:17.279994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.040 [2024-10-15 04:45:17.297904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.040 [2024-10-15 04:45:17.297950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:28.040 [2024-10-15 04:45:17.297963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.896 ms 00:21:28.040 [2024-10-15 04:45:17.297973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.040 
[2024-10-15 04:45:17.298754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.040 [2024-10-15 04:45:17.298788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:28.040 [2024-10-15 04:45:17.298801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:21:28.041 [2024-10-15 04:45:17.298812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.383567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.383641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:28.041 [2024-10-15 04:45:17.383658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.839 ms 00:21:28.041 [2024-10-15 04:45:17.383677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.396393] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:28.041 [2024-10-15 04:45:17.399654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.399695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:28.041 [2024-10-15 04:45:17.399711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.935 ms 00:21:28.041 [2024-10-15 04:45:17.399722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.399839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.399853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:28.041 [2024-10-15 04:45:17.399865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:28.041 [2024-10-15 04:45:17.399875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.399968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.399981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:28.041 [2024-10-15 04:45:17.399992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:28.041 [2024-10-15 04:45:17.400002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.400026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.400037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:28.041 [2024-10-15 04:45:17.400047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:28.041 [2024-10-15 04:45:17.400058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.400090] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:28.041 [2024-10-15 04:45:17.400101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.400115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:28.041 [2024-10-15 04:45:17.400126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:28.041 [2024-10-15 04:45:17.400135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.436601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 
04:45:17.436666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:28.041 [2024-10-15 04:45:17.436681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.501 ms 00:21:28.041 [2024-10-15 04:45:17.436692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.436773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.041 [2024-10-15 04:45:17.436786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:28.041 [2024-10-15 04:45:17.436797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:28.041 [2024-10-15 04:45:17.436807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.041 [2024-10-15 04:45:17.437974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.926 ms, result 0 00:21:29.420  [2024-10-15T04:45:19.859Z] Copying: 30/1024 [MB] (30 MBps) [2024-10-15T04:45:20.875Z] Copying: 65/1024 [MB] (35 MBps) [2024-10-15T04:45:21.809Z] Copying: 99/1024 [MB] (33 MBps) [2024-10-15T04:45:22.743Z] Copying: 128/1024 [MB] (29 MBps) [2024-10-15T04:45:23.677Z] Copying: 158/1024 [MB] (30 MBps) [2024-10-15T04:45:25.053Z] Copying: 191/1024 [MB] (32 MBps) [2024-10-15T04:45:25.693Z] Copying: 222/1024 [MB] (31 MBps) [2024-10-15T04:45:27.071Z] Copying: 253/1024 [MB] (31 MBps) [2024-10-15T04:45:27.639Z] Copying: 284/1024 [MB] (30 MBps) [2024-10-15T04:45:29.014Z] Copying: 312/1024 [MB] (28 MBps) [2024-10-15T04:45:29.950Z] Copying: 345/1024 [MB] (32 MBps) [2024-10-15T04:45:30.886Z] Copying: 378/1024 [MB] (32 MBps) [2024-10-15T04:45:31.863Z] Copying: 406/1024 [MB] (28 MBps) [2024-10-15T04:45:32.800Z] Copying: 434/1024 [MB] (28 MBps) [2024-10-15T04:45:33.736Z] Copying: 465/1024 [MB] (30 MBps) [2024-10-15T04:45:34.672Z] Copying: 498/1024 [MB] (32 MBps) [2024-10-15T04:45:35.630Z] Copying: 531/1024 [MB] (32 MBps) [2024-10-15T04:45:37.002Z] Copying: 564/1024 [MB] (33 MBps) [2024-10-15T04:45:37.936Z] Copying: 595/1024 [MB] (31 MBps) [2024-10-15T04:45:38.871Z] Copying: 627/1024 [MB] (31 MBps) [2024-10-15T04:45:39.806Z] Copying: 656/1024 [MB] (29 MBps) [2024-10-15T04:45:40.772Z] Copying: 685/1024 [MB] (29 MBps) [2024-10-15T04:45:41.708Z] Copying: 714/1024 [MB] (28 MBps) [2024-10-15T04:45:42.642Z] Copying: 742/1024 [MB] (27 MBps) [2024-10-15T04:45:44.023Z] Copying: 769/1024 [MB] (27 MBps) [2024-10-15T04:45:44.965Z] Copying: 797/1024 [MB] (27 MBps) [2024-10-15T04:45:45.903Z] Copying: 825/1024 [MB] (27 MBps) [2024-10-15T04:45:46.838Z] Copying: 854/1024 [MB] (28 MBps) [2024-10-15T04:45:47.772Z] Copying: 883/1024 [MB] (29 MBps) [2024-10-15T04:45:48.707Z] Copying: 913/1024 [MB] (29 MBps) [2024-10-15T04:45:49.643Z] Copying: 942/1024 [MB] (29 MBps) [2024-10-15T04:45:51.019Z] Copying: 972/1024 [MB] (29 MBps) [2024-10-15T04:45:51.587Z] Copying: 1000/1024 [MB] (28 MBps) [2024-10-15T04:45:52.968Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-10-15 04:45:52.625537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.625612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:03.464 [2024-10-15 04:45:52.625629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:03.464 [2024-10-15 04:45:52.625640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.625663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.464 [2024-10-15 04:45:52.629854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.629889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:03.464 [2024-10-15 04:45:52.629902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.178 ms 00:22:03.464 [2024-10-15 04:45:52.629928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.630303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.630315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:03.464 [2024-10-15 04:45:52.630326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:22:03.464 [2024-10-15 04:45:52.630336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.633107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.633128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:03.464 [2024-10-15 04:45:52.633140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:22:03.464 [2024-10-15 04:45:52.633151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.638185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.638225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:03.464 [2024-10-15 04:45:52.638237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.021 ms 00:22:03.464 [2024-10-15 04:45:52.638247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.675013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.675053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:03.464 [2024-10-15 04:45:52.675068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.723 ms 00:22:03.464 [2024-10-15 04:45:52.675078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.695939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.695990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:03.464 [2024-10-15 04:45:52.696005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.853 ms 00:22:03.464 [2024-10-15 04:45:52.696016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.696149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.696163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:03.464 [2024-10-15 04:45:52.696181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:03.464 [2024-10-15 04:45:52.696191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.732669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.732707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:03.464 [2024-10-15 04:45:52.732721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.520 ms 
00:22:03.464 [2024-10-15 04:45:52.732730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.770807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.770868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:03.464 [2024-10-15 04:45:52.770883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.096 ms 00:22:03.464 [2024-10-15 04:45:52.770894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.807011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.807060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:03.464 [2024-10-15 04:45:52.807075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.132 ms 00:22:03.464 [2024-10-15 04:45:52.807086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.842393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.464 [2024-10-15 04:45:52.842442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.464 [2024-10-15 04:45:52.842456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.279 ms 00:22:03.464 [2024-10-15 04:45:52.842467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.464 [2024-10-15 04:45:52.842506] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.464 [2024-10-15 04:45:52.842523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:03.464 [2024-10-15 04:45:52.842666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
[2024-10-15 04:45:52.842676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
[Bands 16-99: identical entries, all 0 / 261120 wr_cnt: 0 state: free]
[2024-10-15 04:45:52.843579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
[2024-10-15 04:45:52.843611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ecc6ff85-7abb-45d9-8cbd-e91e1968175e
[2024-10-15 04:45:52.843626] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-10-15 04:45:52.843636] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-10-15 04:45:52.843646] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-10-15 04:45:52.843656] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-10-15 04:45:52.843665] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-10-15 04:45:52.843724] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.221 ms, status: 0
[2024-10-15 04:45:52.863777] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 19.997 ms, status: 0
[2024-10-15 04:45:52.864333] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.442 ms, status: 0
[2024-10-15 04:45:52.915034 .. 04:45:53.128058] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration: 0.000 ms, status: 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-10-15 04:45:53.128173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.424 ms, result 0
04:45:54 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
04:45:55 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
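The two ftl.ftl_restore commands above are the core of this phase of the test: md5sum -c verifies the file read back from the restored FTL device against the checksum recorded before shutdown, and spdk_dd then writes the test file into the ftl0 bdev again at a block offset. A minimal sketch of the same verify-then-write pattern, reusing the paths and the --seek offset exactly as they appear in the log (the surrounding restore.sh plumbing is omitted):

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk
    # Verify the data survived the FTL shutdown/startup cycle.
    md5sum -c "$SPDK/test/ftl/testfile.md5"
    # Replay the test file into the ftl0 bdev, 131072 blocks in.
    "$SPDK/build/bin/spdk_dd" \
        --if="$SPDK/test/ftl/testfile" \
        --ob=ftl0 \
        --json="$SPDK/test/ftl/config/ftl.json" \
        --seek=131072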
[2024-10-15 04:45:55.991971] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization...
[2024-10-15 04:45:55.992121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77585 ]
[2024-10-15 04:45:56.163976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-15 04:45:56.281205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-15 04:45:56.657517] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-10-15 04:45:56.657593] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-10-15 04:45:56.819121] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.004 ms, status: 0
[2024-10-15 04:45:56.819285] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.031 ms, status: 0
[2024-10-15 04:45:56.819342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-10-15 04:45:56.820250] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-10-15 04:45:56.820281] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.945 ms, status: 0
[2024-10-15 04:45:56.821753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-10-15 04:45:56.840698] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 18.976 ms, status: 0
[2024-10-15 04:45:56.840862] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.023 ms, status: 0
[2024-10-15 04:45:56.847727] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 6.767 ms, status: 0
[2024-10-15 04:45:56.847882] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.064 ms, status: 0
[2024-10-15 04:45:56.847956] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.007 ms, status: 0
[2024-10-15 04:45:56.848010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-10-15 04:45:56.852698] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.699 ms, status: 0
[2024-10-15 04:45:56.852784] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.009 ms, status: 0
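Every management step in the startup sequence reports its own duration, so the slow phases stand out at a glance (Load super block above, and Initialize NV cache and Restore P2L checkpoints further down, dominate this run). A throwaway helper for ranking the steps, assuming the console output is saved as console.log in the condensed one-line form used here (the filename and the line format are assumptions, not part of the test):

    # Rank FTL management steps by reported duration, longest first.
    awk -F', ' '/Action: / {
        d = $2; sub(/duration: /, "", d); sub(/ ms/, "", d)
        name = $1; sub(/.*Action: /, "", name)
        printf "%10.3f ms  %s\n", d, name
    }' console.log | sort -rn | head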
[2024-10-15 04:45:56.852891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-10-15 04:45:56.852914] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-10-15 04:45:56.852949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-10-15 04:45:56.852976] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-10-15 04:45:56.853066] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-10-15 04:45:56.853079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-10-15 04:45:56.853092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-10-15 04:45:56.853105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-10-15 04:45:56.853117] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-10-15 04:45:56.853128] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-10-15 04:45:56.853138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-10-15 04:45:56.853147] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-10-15 04:45:56.853157] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-10-15 04:45:56.853167] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.279 ms, status: 0
[2024-10-15 04:45:56.853283] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.062 ms, status: 0
[2024-10-15 04:45:56.853405] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[FTL][ftl0]   Region sb: offset 0.00 MiB, blocks 0.12 MiB
[FTL][ftl0]   Region l2p: offset 0.12 MiB, blocks 80.00 MiB
[FTL][ftl0]   Region band_md: offset 80.12 MiB, blocks 0.50 MiB
[FTL][ftl0]   Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
[FTL][ftl0]   Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
[FTL][ftl0]   Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
[FTL][ftl0]   Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
[FTL][ftl0]   Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
[FTL][ftl0]   Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
[FTL][ftl0]   Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
[FTL][ftl0]   Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
[FTL][ftl0]   Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
[FTL][ftl0]   Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
[FTL][ftl0]   Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
[2024-10-15 04:45:56.853857] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[FTL][ftl0]   Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
[FTL][ftl0]   Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
[FTL][ftl0]   Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-10-15 04:45:56.853952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[FTL][ftl0]   Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[FTL][ftl0]   Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
[FTL][ftl0]   Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
[FTL][ftl0]   Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
[FTL][ftl0]   Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
[FTL][ftl0]   Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
[FTL][ftl0]   Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
[FTL][ftl0]   Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
[FTL][ftl0]   Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
[FTL][ftl0]   Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
[FTL][ftl0]   Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
[FTL][ftl0]   Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
[FTL][ftl0]   Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
[FTL][ftl0]   Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
[FTL][ftl0]   Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-10-15 04:45:56.854114] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[FTL][ftl0]   Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[FTL][ftl0]   Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[FTL][ftl0]   Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[FTL][ftl0]   Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[FTL][ftl0]   Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-10-15 04:45:56.854181] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.831 ms, status: 0
[2024-10-15 04:45:56.892074] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 37.879 ms, status: 0
[2024-10-15 04:45:56.892234] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.051 ms, status: 0
[2024-10-15 04:45:56.952523] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 60.290 ms, status: 0
[2024-10-15 04:45:56.952675] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.003 ms, status: 0
[2024-10-15 04:45:56.953220] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.429 ms, status: 0
[2024-10-15 04:45:56.953393] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.097 ms, status: 0
[2024-10-15 04:45:56.972597] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 19.168 ms, status: 0
[2024-10-15 04:45:56.991560] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-10-15 04:45:56.991604] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-10-15 04:45:56.991619] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 18.841 ms, status: 0
[2024-10-15 04:45:57.021287] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 29.621 ms, status: 0
[2024-10-15 04:45:57.040593] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 19.201 ms, status: 0
[2024-10-15 04:45:57.059217] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 18.546 ms, status: 0
[2024-10-15 04:45:57.060054] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.618 ms, status: 0
[2024-10-15 04:45:57.145969] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 85.969 ms, status: 0
[2024-10-15 04:45:57.159805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-10-15 04:45:57.163092] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 16.960 ms, status: 0
[2024-10-15 04:45:57.163276] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.008 ms, status: 0
[2024-10-15 04:45:57.163402] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.029 ms, status: 0
[2024-10-15 04:45:57.163459] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.005 ms, status: 0
[2024-10-15 04:45:57.163521] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-10-15 04:45:57.163532] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.012 ms, status: 0
[2024-10-15 04:45:57.200353] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 36.822 ms, status: 0
[2024-10-15 04:45:57.200528] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.039 ms, status: 0
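The layout numbers reported during startup are internally consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB shown for the l2p region, and the data_btm region's 102400.00 MiB is the 100 GiB of user-visible space carved out of the 103424.00 MiB base device. A quick shell-arithmetic check of the first of these:

    # 20971520 L2P entries * 4 bytes per entry, expressed in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80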
[2024-10-15 04:45:57.201653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.725 ms, result 0
[2024-10-15T04:45:59.338Z] Copying: 26/1024 [MB] (26 MBps)
[Copying progress entries from 53/1024 through 1001/1024 MB, at 22-27 MBps each, condensed]
[2024-10-15T04:46:36.895Z] Copying: 1023/1024 [MB] (22 MBps)
[2024-10-15T04:46:36.895Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-10-15 04:46:36.819522] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
[2024-10-15 04:46:36.821173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-10-15 04:46:36.826556] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 5.346 ms, status: 0
[2024-10-15 04:46:36.837811] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 8.499 ms, status: 0
[2024-10-15 04:46:36.862024] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 24.153 ms, status: 0
[2024-10-15 04:46:36.867270] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 5.137 ms, status: 0
[2024-10-15 04:46:36.905303] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 37.971 ms, status: 0
[2024-10-15 04:46:36.926467] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 21.078 ms, status: 0
[2024-10-15 04:46:37.049343] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 122.947 ms, status: 0
[2024-10-15 04:46:37.087338] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 37.911 ms, status: 0
[2024-10-15 04:46:37.123655] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 36.249 ms, status: 0
[2024-10-15 04:46:37.159624] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 35.753 ms, status: 0
[2024-10-15 04:46:37.195293] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 35.570 ms, status: 0
[2024-10-15 04:46:37.195412] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-10-15 04:46:37.195429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115712 / 261120 wr_cnt: 1 state: open
[Bands 2-100: identical entries, all 0 / 261120 wr_cnt: 0 state: free]
[2024-10-15 04:46:37.197055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ecc6ff85-7abb-45d9-8cbd-e91e1968175e
[2024-10-15 04:46:37.197069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115712
[2024-10-15 04:46:37.197082] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116672
[2024-10-15 04:46:37.197098] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115712
[2024-10-15 04:46:37.197117] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083
[2024-10-15 04:46:37.197134] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-10-15 04:46:37.197233] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.823 ms, status: 0
[2024-10-15 04:46:37.215353] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 18.025 ms, status: 0
[2024-10-15 04:46:37.215951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:22:47.912 [2024-10-15 04:46:37.215990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:47.912 [2024-10-15 04:46:37.216004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:22:47.912 [2024-10-15 04:46:37.216015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.912 [2024-10-15 04:46:37.267946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.912 [2024-10-15 04:46:37.268017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.912 [2024-10-15 04:46:37.268054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.912 [2024-10-15 04:46:37.268065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.912 [2024-10-15 04:46:37.268136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.912 [2024-10-15 04:46:37.268147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.912 [2024-10-15 04:46:37.268157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.912 [2024-10-15 04:46:37.268167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.912 [2024-10-15 04:46:37.268248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.912 [2024-10-15 04:46:37.268262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.912 [2024-10-15 04:46:37.268273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.912 [2024-10-15 04:46:37.268287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.912 [2024-10-15 04:46:37.268304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.912 [2024-10-15 04:46:37.268315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.912 [2024-10-15 04:46:37.268324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.912 [2024-10-15 04:46:37.268335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.912 [2024-10-15 04:46:37.393706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.912 [2024-10-15 04:46:37.393766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.912 [2024-10-15 04:46:37.393782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.912 [2024-10-15 04:46:37.393799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.497438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.497508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.171 [2024-10-15 04:46:37.497524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.497535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.497624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.497637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.171 [2024-10-15 04:46:37.497649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.497659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
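The ftl_dev_dump_stats block above reports total writes, user writes, and a WAF for ftl0. As a minimal sketch (plain Python, not part of this job's scripts), the write-amplification factor is just the ratio of total media writes to user writes, and the dumped counters reproduce the dumped WAF:

# Reproduce the WAF printed by ftl_dev_dump_stats above.
# WAF = total writes / user writes.
total_writes = 116672   # "total writes" from the dump
user_writes = 115712    # "user writes" from the dump
print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0083

The 960 writes beyond the 115712 user writes are presumably the device's own metadata traffic, which is what lifts WAF above 1.0.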
00:22:48.171 [2024-10-15 04:46:37.497714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.497727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.171 [2024-10-15 04:46:37.497737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.497747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.497881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.497905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.171 [2024-10-15 04:46:37.497925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.497942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.497987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.498016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:48.171 [2024-10-15 04:46:37.498028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.498038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.498090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.498110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.171 [2024-10-15 04:46:37.498125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.498140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.498192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.171 [2024-10-15 04:46:37.498214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.171 [2024-10-15 04:46:37.498233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.171 [2024-10-15 04:46:37.498252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.171 [2024-10-15 04:46:37.498411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 681.729 ms, result 0 00:22:49.548 00:22:49.548 00:22:49.807 04:46:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:49.807 [2024-10-15 04:46:39.151240] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
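The spdk_dd invocation above drives this restore pass. As a back-of-envelope sketch (assuming --skip and --count are counts of 4096-byte blocks, which is consistent with the 1024 MB total that the copy progress reports further down), the job reads 1 GiB after skipping 512 MiB:

# Hypothetical size check for the spdk_dd flags above.
# ASSUMPTION: --skip/--count are in 4096-byte blocks on ftl0.
BLOCK = 4096
count = 262144          # --count
skip = 131072           # --skip
print(count * BLOCK // 2**20, "MiB copied")    # -> 1024 MiB
print(skip * BLOCK // 2**20, "MiB skipped")    # -> 512 MiB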
00:22:49.807 [2024-10-15 04:46:39.151370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78026 ] 00:22:50.066 [2024-10-15 04:46:39.324081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.067 [2024-10-15 04:46:39.440525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.325 [2024-10-15 04:46:39.812133] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.325 [2024-10-15 04:46:39.812209] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.585 [2024-10-15 04:46:39.973715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.585 [2024-10-15 04:46:39.973776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.585 [2024-10-15 04:46:39.973792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:50.585 [2024-10-15 04:46:39.973808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.585 [2024-10-15 04:46:39.973869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.585 [2024-10-15 04:46:39.973882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.585 [2024-10-15 04:46:39.973893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:50.585 [2024-10-15 04:46:39.973906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.585 [2024-10-15 04:46:39.973928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.585 [2024-10-15 04:46:39.974963] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.585 [2024-10-15 04:46:39.975006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.585 [2024-10-15 04:46:39.975020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.586 [2024-10-15 04:46:39.975034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:22:50.586 [2024-10-15 04:46:39.975047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:39.976641] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:50.586 [2024-10-15 04:46:39.995957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:39.996011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:50.586 [2024-10-15 04:46:39.996027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.349 ms 00:22:50.586 [2024-10-15 04:46:39.996038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:39.996115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:39.996132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:50.586 [2024-10-15 04:46:39.996143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:50.586 [2024-10-15 04:46:39.996154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.003836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:50.586 [2024-10-15 04:46:40.003890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.586 [2024-10-15 04:46:40.003906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.616 ms 00:22:50.586 [2024-10-15 04:46:40.003920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.004031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:40.004048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.586 [2024-10-15 04:46:40.004062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:50.586 [2024-10-15 04:46:40.004075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.004130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:40.004144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.586 [2024-10-15 04:46:40.004158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:50.586 [2024-10-15 04:46:40.004170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.004200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.586 [2024-10-15 04:46:40.009230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:40.009275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.586 [2024-10-15 04:46:40.009289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.045 ms 00:22:50.586 [2024-10-15 04:46:40.009299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.009337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:40.009349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.586 [2024-10-15 04:46:40.009360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:50.586 [2024-10-15 04:46:40.009370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.009433] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:50.586 [2024-10-15 04:46:40.009457] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:50.586 [2024-10-15 04:46:40.009497] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:50.586 [2024-10-15 04:46:40.009533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:50.586 [2024-10-15 04:46:40.009634] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.586 [2024-10-15 04:46:40.009651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.586 [2024-10-15 04:46:40.009669] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:50.586 [2024-10-15 04:46:40.009689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.586 [2024-10-15 04:46:40.009709] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.586 [2024-10-15 04:46:40.009728] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.586 [2024-10-15 04:46:40.009747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.586 [2024-10-15 04:46:40.009763] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.586 [2024-10-15 04:46:40.009777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.586 [2024-10-15 04:46:40.009791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:40.009812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.586 [2024-10-15 04:46:40.009843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:22:50.586 [2024-10-15 04:46:40.009856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.009961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.586 [2024-10-15 04:46:40.009977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.586 [2024-10-15 04:46:40.009990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:50.586 [2024-10-15 04:46:40.010005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.586 [2024-10-15 04:46:40.010119] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.586 [2024-10-15 04:46:40.010158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.586 [2024-10-15 04:46:40.010185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.586 [2024-10-15 04:46:40.010235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.586 [2024-10-15 04:46:40.010280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.586 [2024-10-15 04:46:40.010307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.586 [2024-10-15 04:46:40.010323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.586 [2024-10-15 04:46:40.010340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.586 [2024-10-15 04:46:40.010354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.586 [2024-10-15 04:46:40.010367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.586 [2024-10-15 04:46:40.010390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.586 [2024-10-15 04:46:40.010414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010426] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.586 [2024-10-15 04:46:40.010456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.586 [2024-10-15 04:46:40.010506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.586 [2024-10-15 04:46:40.010557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.586 [2024-10-15 04:46:40.010597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.586 [2024-10-15 04:46:40.010636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.586 [2024-10-15 04:46:40.010666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.586 [2024-10-15 04:46:40.010682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.586 [2024-10-15 04:46:40.010698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.586 [2024-10-15 04:46:40.010710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.586 [2024-10-15 04:46:40.010722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:50.586 [2024-10-15 04:46:40.010736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.586 [2024-10-15 04:46:40.010772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.586 [2024-10-15 04:46:40.010789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.586 [2024-10-15 04:46:40.010846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.586 [2024-10-15 04:46:40.010862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.586 [2024-10-15 04:46:40.010875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.586 [2024-10-15 04:46:40.010887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.586 [2024-10-15 04:46:40.010900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.586 [2024-10-15 04:46:40.010913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.586 
[2024-10-15 04:46:40.010931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.586 [2024-10-15 04:46:40.010947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.587 [2024-10-15 04:46:40.010958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.587 [2024-10-15 04:46:40.010970] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.587 [2024-10-15 04:46:40.010986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.587 [2024-10-15 04:46:40.011028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.587 [2024-10-15 04:46:40.011047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.587 [2024-10-15 04:46:40.011065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.587 [2024-10-15 04:46:40.011085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.587 [2024-10-15 04:46:40.011102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.587 [2024-10-15 04:46:40.011119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.587 [2024-10-15 04:46:40.011133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.587 [2024-10-15 04:46:40.011146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.587 [2024-10-15 04:46:40.011161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.587 [2024-10-15 04:46:40.011239] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.587 [2024-10-15 04:46:40.011257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.587 [2024-10-15 04:46:40.011306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.587 [2024-10-15 04:46:40.011326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.587 [2024-10-15 04:46:40.011342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.587 [2024-10-15 04:46:40.011362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.587 [2024-10-15 04:46:40.011378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.587 [2024-10-15 04:46:40.011394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:22:50.587 [2024-10-15 04:46:40.011407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.587 [2024-10-15 04:46:40.050423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.587 [2024-10-15 04:46:40.050471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.587 [2024-10-15 04:46:40.050487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.013 ms 00:22:50.587 [2024-10-15 04:46:40.050498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.587 [2024-10-15 04:46:40.050586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.587 [2024-10-15 04:46:40.050602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:50.587 [2024-10-15 04:46:40.050613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:50.587 [2024-10-15 04:46:40.050623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.116727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.116777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.847 [2024-10-15 04:46:40.116791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.141 ms 00:22:50.847 [2024-10-15 04:46:40.116802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.116877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.116890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.847 [2024-10-15 04:46:40.116901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:50.847 [2024-10-15 04:46:40.116912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.117505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.117548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.847 [2024-10-15 04:46:40.117572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:22:50.847 [2024-10-15 04:46:40.117590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.117754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.117785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.847 [2024-10-15 04:46:40.117806] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:50.847 [2024-10-15 04:46:40.117839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.137387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.137430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.847 [2024-10-15 04:46:40.137444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.532 ms 00:22:50.847 [2024-10-15 04:46:40.137458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.157003] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:50.847 [2024-10-15 04:46:40.157047] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:50.847 [2024-10-15 04:46:40.157063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.157073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:50.847 [2024-10-15 04:46:40.157085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.518 ms 00:22:50.847 [2024-10-15 04:46:40.157096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.186901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.186967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:50.847 [2024-10-15 04:46:40.186982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.808 ms 00:22:50.847 [2024-10-15 04:46:40.186994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.206020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.206091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:50.847 [2024-10-15 04:46:40.206106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.011 ms 00:22:50.847 [2024-10-15 04:46:40.206116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.225097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.225140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:50.847 [2024-10-15 04:46:40.225154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.967 ms 00:22:50.847 [2024-10-15 04:46:40.225164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.226015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.226054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:50.847 [2024-10-15 04:46:40.226067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:22:50.847 [2024-10-15 04:46:40.226077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.313352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.313425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:50.847 [2024-10-15 04:46:40.313442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.386 ms 00:22:50.847 [2024-10-15 04:46:40.313460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.324761] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:50.847 [2024-10-15 04:46:40.328014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.328056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:50.847 [2024-10-15 04:46:40.328072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.510 ms 00:22:50.847 [2024-10-15 04:46:40.328082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.328183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.328196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:50.847 [2024-10-15 04:46:40.328208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.847 [2024-10-15 04:46:40.328218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.329926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.329971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:50.847 [2024-10-15 04:46:40.329986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:22:50.847 [2024-10-15 04:46:40.329999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.330044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.330059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:50.847 [2024-10-15 04:46:40.330072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:50.847 [2024-10-15 04:46:40.330084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.847 [2024-10-15 04:46:40.330125] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:50.847 [2024-10-15 04:46:40.330139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.847 [2024-10-15 04:46:40.330157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:50.847 [2024-10-15 04:46:40.330174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:50.847 [2024-10-15 04:46:40.330192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.107 [2024-10-15 04:46:40.366175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.107 [2024-10-15 04:46:40.366226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:51.107 [2024-10-15 04:46:40.366241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.007 ms 00:22:51.107 [2024-10-15 04:46:40.366252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.107 [2024-10-15 04:46:40.366337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.107 [2024-10-15 04:46:40.366350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:51.107 [2024-10-15 04:46:40.366361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:51.107 [2024-10-15 04:46:40.366371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
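The layout and superblock dumps above are internally consistent and can be cross-checked. A minimal sketch (assuming 4 KiB FTL blocks, which the MiB figures in the dump imply): the region with blk_sz:0x5000 matches the 80.00 MiB l2p region, and 20971520 L2P entries at the 4-byte address size come to the same 80 MiB:

# Cross-check the FTL layout dump above (assumes 4 KiB FTL blocks).
FTL_BLOCK = 4096
MiB = 2**20
l2p_blk_sz = 0x5000                    # blk_sz of Region type:0x2 in the SB metadata layout
print(l2p_blk_sz * FTL_BLOCK / MiB)    # -> 80.0, matches "Region l2p ... blocks: 80.00 MiB"
entries, addr_size = 20971520, 4       # "L2P entries" / "L2P address size"
print(entries * addr_size / MiB)       # -> 80.0, the same table sized from the entry count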
00:22:51.107 [2024-10-15 04:46:40.367634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.114 ms, result 0
00:22:52.484 [2024-10-15T04:47:17.834Z] Copying: 1024/1024 [MB] (average 27 MBps)
[2024-10-15 04:47:17.611695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.330 [2024-10-15 04:47:17.611781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:28.330 [2024-10-15 04:47:17.611807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:28.330 [2024-10-15 04:47:17.611859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.330 [2024-10-15 04:47:17.611899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:28.330 [2024-10-15 04:47:17.618092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.330 [2024-10-15 04:47:17.618168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:28.330 [2024-10-15 04:47:17.618188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.176 ms
00:23:28.330 [2024-10-15 04:47:17.618202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.330 [2024-10-15 04:47:17.618482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:23:28.330 [2024-10-15 04:47:17.618500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:28.330 [2024-10-15 04:47:17.618515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:23:28.330 [2024-10-15 04:47:17.618530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.330 [2024-10-15 04:47:17.623985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.330 [2024-10-15 04:47:17.624045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:28.330 [2024-10-15 04:47:17.624063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:23:28.330 [2024-10-15 04:47:17.624078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.330 [2024-10-15 04:47:17.629691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.330 [2024-10-15 04:47:17.629728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:28.330 [2024-10-15 04:47:17.629757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.577 ms 00:23:28.330 [2024-10-15 04:47:17.629767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.330 [2024-10-15 04:47:17.666205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.330 [2024-10-15 04:47:17.666250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:28.330 [2024-10-15 04:47:17.666280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.447 ms 00:23:28.330 [2024-10-15 04:47:17.666291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.330 [2024-10-15 04:47:17.687032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.330 [2024-10-15 04:47:17.687073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:28.330 [2024-10-15 04:47:17.687092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.736 ms 00:23:28.330 [2024-10-15 04:47:17.687103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.330 [2024-10-15 04:47:17.817894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.330 [2024-10-15 04:47:17.817957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:28.330 [2024-10-15 04:47:17.817975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.942 ms 00:23:28.330 [2024-10-15 04:47:17.817987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.600 [2024-10-15 04:47:17.855547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.600 [2024-10-15 04:47:17.855599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:28.600 [2024-10-15 04:47:17.855614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.599 ms 00:23:28.600 [2024-10-15 04:47:17.855631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.600 [2024-10-15 04:47:17.891864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.600 [2024-10-15 04:47:17.891915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:28.600 [2024-10-15 04:47:17.891943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.248 ms 00:23:28.600 [2024-10-15 04:47:17.891952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.600 [2024-10-15 
04:47:17.928161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.600 [2024-10-15 04:47:17.928214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:28.600 [2024-10-15 04:47:17.928228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.225 ms
00:23:28.600 [2024-10-15 04:47:17.928238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.600 [2024-10-15 04:47:17.964369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.600 [2024-10-15 04:47:17.964433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:28.600 [2024-10-15 04:47:17.964449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.104 ms
00:23:28.600 [2024-10-15 04:47:17.964459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.600 [2024-10-15 04:47:17.964520] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:28.600 [2024-10-15 04:47:17.964539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:23:28.600 [2024-10-15 04:47:17.964553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
00:23:28.601 [2024-10-15 04:47:17.965629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:28.601 [2024-10-15 04:47:17.965639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ecc6ff85-7abb-45d9-8cbd-e91e1968175e
00:23:28.601 [2024-10-15 04:47:17.965651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:23:28.601 [2024-10-15 04:47:17.965661] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16320
00:23:28.601 [2024-10-15 04:47:17.965671] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15360
00:23:28.601 [2024-10-15 04:47:17.965681] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625
00:23:28.601 [2024-10-15 04:47:17.965691] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:28.601 [2024-10-15 04:47:17.965701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:28.601 [2024-10-15 04:47:17.965717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:28.601 [2024-10-15 04:47:17.965738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:28.601 [2024-10-15 04:47:17.965748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:28.601 [2024-10-15 04:47:17.965758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.601 [2024-10-15 04:47:17.965768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:28.601 [2024-10-15 04:47:17.965778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms
00:23:28.601 [2024-10-15 04:47:17.965788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.601 [2024-10-15 04:47:17.986000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.601 [2024-10-15 04:47:17.986050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:28.601 [2024-10-15 04:47:17.986065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.187 ms
00:23:28.601 [2024-10-15 04:47:17.986075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.601 [2024-10-15 04:47:17.986607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.601 [2024-10-15 04:47:17.986623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:28.601 [2024-10-15 04:47:17.986635]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:23:28.601 [2024-10-15 04:47:17.986645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.601 [2024-10-15 04:47:18.037887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.601 [2024-10-15 04:47:18.037937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.601 [2024-10-15 04:47:18.037956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.601 [2024-10-15 04:47:18.037967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.601 [2024-10-15 04:47:18.038026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.601 [2024-10-15 04:47:18.038037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.601 [2024-10-15 04:47:18.038048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.601 [2024-10-15 04:47:18.038058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.601 [2024-10-15 04:47:18.038146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.601 [2024-10-15 04:47:18.038160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.601 [2024-10-15 04:47:18.038170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.601 [2024-10-15 04:47:18.038180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.601 [2024-10-15 04:47:18.038201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.601 [2024-10-15 04:47:18.038211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.601 [2024-10-15 04:47:18.038221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.601 [2024-10-15 04:47:18.038231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.859 [2024-10-15 04:47:18.161916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.859 [2024-10-15 04:47:18.162002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.859 [2024-10-15 04:47:18.162016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.859 [2024-10-15 04:47:18.162034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.859 [2024-10-15 04:47:18.261494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.859 [2024-10-15 04:47:18.261551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.859 [2024-10-15 04:47:18.261565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.859 [2024-10-15 04:47:18.261577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.859 [2024-10-15 04:47:18.261669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.859 [2024-10-15 04:47:18.261682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.859 [2024-10-15 04:47:18.261693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.859 [2024-10-15 04:47:18.261703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.859 [2024-10-15 04:47:18.261743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.859 [2024-10-15 04:47:18.261755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:23:28.859 [2024-10-15 04:47:18.261765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.859 [2024-10-15 04:47:18.261775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.859 [2024-10-15 04:47:18.261911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.860 [2024-10-15 04:47:18.261925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.860 [2024-10-15 04:47:18.261936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.860 [2024-10-15 04:47:18.261946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.860 [2024-10-15 04:47:18.261980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.860 [2024-10-15 04:47:18.261997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:28.860 [2024-10-15 04:47:18.262007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.860 [2024-10-15 04:47:18.262017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.860 [2024-10-15 04:47:18.262067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.860 [2024-10-15 04:47:18.262080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.860 [2024-10-15 04:47:18.262091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.860 [2024-10-15 04:47:18.262100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.860 [2024-10-15 04:47:18.262146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.860 [2024-10-15 04:47:18.262158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.860 [2024-10-15 04:47:18.262168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.860 [2024-10-15 04:47:18.262178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.860 [2024-10-15 04:47:18.262285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 651.628 ms, result 0 00:23:29.819 00:23:29.819 00:23:30.079 04:47:19 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:31.991 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:31.991 Process with pid 76522 is not found 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76522 00:23:31.991 04:47:21 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76522 ']' 00:23:31.991 04:47:21 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76522 00:23:31.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76522) - No such process 00:23:31.991 04:47:21 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 76522 is not found' 00:23:31.991 
Remove shared memory files 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:31.991 04:47:21 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:31.991 00:23:31.991 real 3m4.521s 00:23:31.991 user 2m51.819s 00:23:31.991 sys 0m13.813s 00:23:31.991 04:47:21 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.991 04:47:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:31.991 ************************************ 00:23:31.991 END TEST ftl_restore 00:23:31.991 ************************************ 00:23:31.991 04:47:21 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:31.991 04:47:21 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:31.991 04:47:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.991 04:47:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:31.991 ************************************ 00:23:31.991 START TEST ftl_dirty_shutdown 00:23:31.991 ************************************ 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:31.992 * Looking for test storage... 00:23:31.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.992 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:32.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.251 --rc genhtml_branch_coverage=1 00:23:32.251 --rc genhtml_function_coverage=1 00:23:32.251 --rc genhtml_legend=1 00:23:32.251 --rc geninfo_all_blocks=1 00:23:32.251 --rc geninfo_unexecuted_blocks=1 00:23:32.251 00:23:32.251 ' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:32.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.251 --rc genhtml_branch_coverage=1 00:23:32.251 --rc genhtml_function_coverage=1 00:23:32.251 --rc genhtml_legend=1 00:23:32.251 --rc geninfo_all_blocks=1 00:23:32.251 --rc geninfo_unexecuted_blocks=1 00:23:32.251 00:23:32.251 ' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:32.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.251 --rc genhtml_branch_coverage=1 00:23:32.251 --rc genhtml_function_coverage=1 00:23:32.251 --rc genhtml_legend=1 00:23:32.251 --rc geninfo_all_blocks=1 00:23:32.251 --rc geninfo_unexecuted_blocks=1 00:23:32.251 00:23:32.251 ' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:32.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.251 --rc genhtml_branch_coverage=1 00:23:32.251 --rc genhtml_function_coverage=1 00:23:32.251 --rc genhtml_legend=1 00:23:32.251 --rc geninfo_all_blocks=1 00:23:32.251 --rc geninfo_unexecuted_blocks=1 00:23:32.251 00:23:32.251 ' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:32.251 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:32.252 04:47:21 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78519 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78519 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78519 ']' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.252 04:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:32.252 [2024-10-15 04:47:21.652702] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
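Note: the target bring-up traced just above (spdk_tgt pinned to core 0 as pid 78519, with waitforlisten blocking on /var/tmp/spdk.sock) can be reproduced by hand. A minimal sketch using this workspace's paths; the until/sleep poll and the use of rpc_get_methods as a readiness probe are simplified stand-ins for autotest_common.sh's waitforlisten, not what the harness literally runs:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# rpc.py talks to /var/tmp/spdk.sock by default; poll until the target answers an RPC
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
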
00:23:32.252 [2024-10-15 04:47:21.652858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78519 ] 00:23:32.511 [2024-10-15 04:47:21.809788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.511 [2024-10-15 04:47:21.921212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:33.449 04:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:33.707 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:33.965 { 00:23:33.965 "name": "nvme0n1", 00:23:33.965 "aliases": [ 00:23:33.965 "bb116a82-9435-4031-b7c5-b88938db74fc" 00:23:33.965 ], 00:23:33.965 "product_name": "NVMe disk", 00:23:33.965 "block_size": 4096, 00:23:33.965 "num_blocks": 1310720, 00:23:33.965 "uuid": "bb116a82-9435-4031-b7c5-b88938db74fc", 00:23:33.965 "numa_id": -1, 00:23:33.965 "assigned_rate_limits": { 00:23:33.965 "rw_ios_per_sec": 0, 00:23:33.965 "rw_mbytes_per_sec": 0, 00:23:33.965 "r_mbytes_per_sec": 0, 00:23:33.965 "w_mbytes_per_sec": 0 00:23:33.965 }, 00:23:33.965 "claimed": true, 00:23:33.965 "claim_type": "read_many_write_one", 00:23:33.965 "zoned": false, 00:23:33.965 "supported_io_types": { 00:23:33.965 "read": true, 00:23:33.965 "write": true, 00:23:33.965 "unmap": true, 00:23:33.965 "flush": true, 00:23:33.965 "reset": true, 00:23:33.965 "nvme_admin": true, 00:23:33.965 "nvme_io": true, 00:23:33.965 "nvme_io_md": false, 00:23:33.965 "write_zeroes": true, 00:23:33.965 "zcopy": false, 00:23:33.965 "get_zone_info": false, 00:23:33.965 "zone_management": false, 00:23:33.965 "zone_append": false, 00:23:33.965 "compare": true, 00:23:33.965 "compare_and_write": false, 00:23:33.965 "abort": true, 00:23:33.965 "seek_hole": false, 00:23:33.965 "seek_data": false, 00:23:33.965 
"copy": true, 00:23:33.965 "nvme_iov_md": false 00:23:33.965 }, 00:23:33.965 "driver_specific": { 00:23:33.965 "nvme": [ 00:23:33.965 { 00:23:33.965 "pci_address": "0000:00:11.0", 00:23:33.965 "trid": { 00:23:33.965 "trtype": "PCIe", 00:23:33.965 "traddr": "0000:00:11.0" 00:23:33.965 }, 00:23:33.965 "ctrlr_data": { 00:23:33.965 "cntlid": 0, 00:23:33.965 "vendor_id": "0x1b36", 00:23:33.965 "model_number": "QEMU NVMe Ctrl", 00:23:33.965 "serial_number": "12341", 00:23:33.965 "firmware_revision": "8.0.0", 00:23:33.965 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:33.965 "oacs": { 00:23:33.965 "security": 0, 00:23:33.965 "format": 1, 00:23:33.965 "firmware": 0, 00:23:33.965 "ns_manage": 1 00:23:33.965 }, 00:23:33.965 "multi_ctrlr": false, 00:23:33.965 "ana_reporting": false 00:23:33.965 }, 00:23:33.965 "vs": { 00:23:33.965 "nvme_version": "1.4" 00:23:33.965 }, 00:23:33.965 "ns_data": { 00:23:33.965 "id": 1, 00:23:33.965 "can_share": false 00:23:33.965 } 00:23:33.965 } 00:23:33.965 ], 00:23:33.965 "mp_policy": "active_passive" 00:23:33.965 } 00:23:33.965 } 00:23:33.965 ]' 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:33.965 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:34.224 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=297b7b3e-1d3c-4a72-89ce-7223e976ccdc 00:23:34.224 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:34.224 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 297b7b3e-1d3c-4a72-89ce-7223e976ccdc 00:23:34.483 04:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a8e05932-182e-4c17-8781-8bfe1c81fef8 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a8e05932-182e-4c17-8781-8bfe1c81fef8 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0808869f-2082-4890-8154-b4d1a22869ca 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0808869f-2082-4890-8154-b4d1a22869ca 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:34.742 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=0808869f-2082-4890-8154-b4d1a22869ca 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0808869f-2082-4890-8154-b4d1a22869ca 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0808869f-2082-4890-8154-b4d1a22869ca 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:34.743 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0808869f-2082-4890-8154-b4d1a22869ca 00:23:35.002 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:35.002 { 00:23:35.002 "name": "0808869f-2082-4890-8154-b4d1a22869ca", 00:23:35.002 "aliases": [ 00:23:35.002 "lvs/nvme0n1p0" 00:23:35.002 ], 00:23:35.002 "product_name": "Logical Volume", 00:23:35.002 "block_size": 4096, 00:23:35.002 "num_blocks": 26476544, 00:23:35.002 "uuid": "0808869f-2082-4890-8154-b4d1a22869ca", 00:23:35.002 "assigned_rate_limits": { 00:23:35.002 "rw_ios_per_sec": 0, 00:23:35.002 "rw_mbytes_per_sec": 0, 00:23:35.002 "r_mbytes_per_sec": 0, 00:23:35.002 "w_mbytes_per_sec": 0 00:23:35.002 }, 00:23:35.002 "claimed": false, 00:23:35.002 "zoned": false, 00:23:35.002 "supported_io_types": { 00:23:35.002 "read": true, 00:23:35.002 "write": true, 00:23:35.002 "unmap": true, 00:23:35.002 "flush": false, 00:23:35.002 "reset": true, 00:23:35.002 "nvme_admin": false, 00:23:35.002 "nvme_io": false, 00:23:35.002 "nvme_io_md": false, 00:23:35.002 "write_zeroes": true, 00:23:35.002 "zcopy": false, 00:23:35.002 "get_zone_info": false, 00:23:35.002 "zone_management": false, 00:23:35.002 "zone_append": false, 00:23:35.002 "compare": false, 00:23:35.002 "compare_and_write": false, 00:23:35.002 "abort": false, 00:23:35.002 "seek_hole": true, 00:23:35.002 "seek_data": true, 00:23:35.002 "copy": false, 00:23:35.002 "nvme_iov_md": false 00:23:35.002 }, 00:23:35.002 "driver_specific": { 00:23:35.002 "lvol": { 00:23:35.002 "lvol_store_uuid": "a8e05932-182e-4c17-8781-8bfe1c81fef8", 00:23:35.002 "base_bdev": "nvme0n1", 00:23:35.002 "thin_provision": true, 00:23:35.002 "num_allocated_clusters": 0, 00:23:35.002 "snapshot": false, 00:23:35.002 "clone": false, 00:23:35.002 "esnap_clone": false 00:23:35.002 } 00:23:35.002 } 00:23:35.002 } 00:23:35.002 ]' 00:23:35.002 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:35.002 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:35.002 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:35.261 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:35.261 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:35.261 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:35.261 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:35.261 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:35.261 04:47:24 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 0808869f-2082-4890-8154-b4d1a22869ca 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0808869f-2082-4890-8154-b4d1a22869ca 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:35.520 04:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0808869f-2082-4890-8154-b4d1a22869ca 00:23:35.779 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:35.779 { 00:23:35.779 "name": "0808869f-2082-4890-8154-b4d1a22869ca", 00:23:35.779 "aliases": [ 00:23:35.779 "lvs/nvme0n1p0" 00:23:35.779 ], 00:23:35.779 "product_name": "Logical Volume", 00:23:35.779 "block_size": 4096, 00:23:35.779 "num_blocks": 26476544, 00:23:35.779 "uuid": "0808869f-2082-4890-8154-b4d1a22869ca", 00:23:35.779 "assigned_rate_limits": { 00:23:35.779 "rw_ios_per_sec": 0, 00:23:35.779 "rw_mbytes_per_sec": 0, 00:23:35.779 "r_mbytes_per_sec": 0, 00:23:35.779 "w_mbytes_per_sec": 0 00:23:35.779 }, 00:23:35.779 "claimed": false, 00:23:35.779 "zoned": false, 00:23:35.779 "supported_io_types": { 00:23:35.779 "read": true, 00:23:35.779 "write": true, 00:23:35.780 "unmap": true, 00:23:35.780 "flush": false, 00:23:35.780 "reset": true, 00:23:35.780 "nvme_admin": false, 00:23:35.780 "nvme_io": false, 00:23:35.780 "nvme_io_md": false, 00:23:35.780 "write_zeroes": true, 00:23:35.780 "zcopy": false, 00:23:35.780 "get_zone_info": false, 00:23:35.780 "zone_management": false, 00:23:35.780 "zone_append": false, 00:23:35.780 "compare": false, 00:23:35.780 "compare_and_write": false, 00:23:35.780 "abort": false, 00:23:35.780 "seek_hole": true, 00:23:35.780 "seek_data": true, 00:23:35.780 "copy": false, 00:23:35.780 "nvme_iov_md": false 00:23:35.780 }, 00:23:35.780 "driver_specific": { 00:23:35.780 "lvol": { 00:23:35.780 "lvol_store_uuid": "a8e05932-182e-4c17-8781-8bfe1c81fef8", 00:23:35.780 "base_bdev": "nvme0n1", 00:23:35.780 "thin_provision": true, 00:23:35.780 "num_allocated_clusters": 0, 00:23:35.780 "snapshot": false, 00:23:35.780 "clone": false, 00:23:35.780 "esnap_clone": false 00:23:35.780 } 00:23:35.780 } 00:23:35.780 } 00:23:35.780 ]' 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:35.780 04:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0808869f-2082-4890-8154-b4d1a22869ca 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0808869f-2082-4890-8154-b4d1a22869ca 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0808869f-2082-4890-8154-b4d1a22869ca 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:36.039 { 00:23:36.039 "name": "0808869f-2082-4890-8154-b4d1a22869ca", 00:23:36.039 "aliases": [ 00:23:36.039 "lvs/nvme0n1p0" 00:23:36.039 ], 00:23:36.039 "product_name": "Logical Volume", 00:23:36.039 "block_size": 4096, 00:23:36.039 "num_blocks": 26476544, 00:23:36.039 "uuid": "0808869f-2082-4890-8154-b4d1a22869ca", 00:23:36.039 "assigned_rate_limits": { 00:23:36.039 "rw_ios_per_sec": 0, 00:23:36.039 "rw_mbytes_per_sec": 0, 00:23:36.039 "r_mbytes_per_sec": 0, 00:23:36.039 "w_mbytes_per_sec": 0 00:23:36.039 }, 00:23:36.039 "claimed": false, 00:23:36.039 "zoned": false, 00:23:36.039 "supported_io_types": { 00:23:36.039 "read": true, 00:23:36.039 "write": true, 00:23:36.039 "unmap": true, 00:23:36.039 "flush": false, 00:23:36.039 "reset": true, 00:23:36.039 "nvme_admin": false, 00:23:36.039 "nvme_io": false, 00:23:36.039 "nvme_io_md": false, 00:23:36.039 "write_zeroes": true, 00:23:36.039 "zcopy": false, 00:23:36.039 "get_zone_info": false, 00:23:36.039 "zone_management": false, 00:23:36.039 "zone_append": false, 00:23:36.039 "compare": false, 00:23:36.039 "compare_and_write": false, 00:23:36.039 "abort": false, 00:23:36.039 "seek_hole": true, 00:23:36.039 "seek_data": true, 00:23:36.039 "copy": false, 00:23:36.039 "nvme_iov_md": false 00:23:36.039 }, 00:23:36.039 "driver_specific": { 00:23:36.039 "lvol": { 00:23:36.039 "lvol_store_uuid": "a8e05932-182e-4c17-8781-8bfe1c81fef8", 00:23:36.039 "base_bdev": "nvme0n1", 00:23:36.039 "thin_provision": true, 00:23:36.039 "num_allocated_clusters": 0, 00:23:36.039 "snapshot": false, 00:23:36.039 "clone": false, 00:23:36.039 "esnap_clone": false 00:23:36.039 } 00:23:36.039 } 00:23:36.039 } 00:23:36.039 ]' 00:23:36.039 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0808869f-2082-4890-8154-b4d1a22869ca 
--l2p_dram_limit 10' 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:36.324 04:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0808869f-2082-4890-8154-b4d1a22869ca --l2p_dram_limit 10 -c nvc0n1p0 00:23:36.324 [2024-10-15 04:47:25.807795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.324 [2024-10-15 04:47:25.808048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.324 [2024-10-15 04:47:25.808079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.324 [2024-10-15 04:47:25.808091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.324 [2024-10-15 04:47:25.808184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.324 [2024-10-15 04:47:25.808201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.324 [2024-10-15 04:47:25.808214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:36.324 [2024-10-15 04:47:25.808224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.324 [2024-10-15 04:47:25.808249] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.324 [2024-10-15 04:47:25.809211] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.324 [2024-10-15 04:47:25.809250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.324 [2024-10-15 04:47:25.809261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.324 [2024-10-15 04:47:25.809274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:23:36.324 [2024-10-15 04:47:25.809292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.324 [2024-10-15 04:47:25.809372] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f313f300-eab4-4dd8-837e-ccea041cb153 00:23:36.595 [2024-10-15 04:47:25.810773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.810809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:36.595 [2024-10-15 04:47:25.810833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:36.595 [2024-10-15 04:47:25.810848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.818226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.818259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.595 [2024-10-15 04:47:25.818272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.348 ms 00:23:36.595 [2024-10-15 04:47:25.818288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.818383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.818400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.595 [2024-10-15 04:47:25.818411] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:36.595 [2024-10-15 04:47:25.818428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.818493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.818507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.595 [2024-10-15 04:47:25.818518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:36.595 [2024-10-15 04:47:25.818531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.818559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.595 [2024-10-15 04:47:25.823557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.823699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.595 [2024-10-15 04:47:25.823724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.014 ms 00:23:36.595 [2024-10-15 04:47:25.823740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.823780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.823791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.595 [2024-10-15 04:47:25.823804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:36.595 [2024-10-15 04:47:25.823835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.823874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:36.595 [2024-10-15 04:47:25.824011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.595 [2024-10-15 04:47:25.824031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.595 [2024-10-15 04:47:25.824044] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.595 [2024-10-15 04:47:25.824060] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.595 [2024-10-15 04:47:25.824072] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.595 [2024-10-15 04:47:25.824086] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.595 [2024-10-15 04:47:25.824096] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.595 [2024-10-15 04:47:25.824108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.595 [2024-10-15 04:47:25.824118] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.595 [2024-10-15 04:47:25.824134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.824144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.595 [2024-10-15 04:47:25.824157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:23:36.595 [2024-10-15 04:47:25.824177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.824255] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-10-15 04:47:25.824266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.595 [2024-10-15 04:47:25.824278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:36.595 [2024-10-15 04:47:25.824288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-10-15 04:47:25.824375] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.595 [2024-10-15 04:47:25.824390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.596 [2024-10-15 04:47:25.824402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.596 [2024-10-15 04:47:25.824435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.596 [2024-10-15 04:47:25.824468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.596 [2024-10-15 04:47:25.824488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.596 [2024-10-15 04:47:25.824498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.596 [2024-10-15 04:47:25.824510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.596 [2024-10-15 04:47:25.824519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.596 [2024-10-15 04:47:25.824530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.596 [2024-10-15 04:47:25.824540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.596 [2024-10-15 04:47:25.824563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.596 [2024-10-15 04:47:25.824597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.596 [2024-10-15 04:47:25.824627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.596 [2024-10-15 04:47:25.824658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824679] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.596 [2024-10-15 04:47:25.824688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.596 [2024-10-15 04:47:25.824725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.596 [2024-10-15 04:47:25.824745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.596 [2024-10-15 04:47:25.824754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.596 [2024-10-15 04:47:25.824766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.596 [2024-10-15 04:47:25.824776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.596 [2024-10-15 04:47:25.824787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:36.596 [2024-10-15 04:47:25.824796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.596 [2024-10-15 04:47:25.824829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.596 [2024-10-15 04:47:25.824841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824850] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.596 [2024-10-15 04:47:25.824862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.596 [2024-10-15 04:47:25.824872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.596 [2024-10-15 04:47:25.824895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.596 [2024-10-15 04:47:25.824911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.596 [2024-10-15 04:47:25.824921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.596 [2024-10-15 04:47:25.824933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.596 [2024-10-15 04:47:25.824942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.596 [2024-10-15 04:47:25.824954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.596 [2024-10-15 04:47:25.824967] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.596 [2024-10-15 04:47:25.824982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.824993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.596 [2024-10-15 04:47:25.825006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.596 [2024-10-15 04:47:25.825016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.596 [2024-10-15 04:47:25.825030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.596 [2024-10-15 04:47:25.825040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.596 [2024-10-15 04:47:25.825053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.596 [2024-10-15 04:47:25.825063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.596 [2024-10-15 04:47:25.825077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:36.596 [2024-10-15 04:47:25.825087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.596 [2024-10-15 04:47:25.825103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.825113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.825125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.825135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.825148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.596 [2024-10-15 04:47:25.825158] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.596 [2024-10-15 04:47:25.825172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.825187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.596 [2024-10-15 04:47:25.825200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.596 [2024-10-15 04:47:25.825210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.596 [2024-10-15 04:47:25.825223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.596 [2024-10-15 04:47:25.825234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.596 [2024-10-15 04:47:25.825247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.596 [2024-10-15 04:47:25.825257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:23:36.596 [2024-10-15 04:47:25.825270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.596 [2024-10-15 04:47:25.825324] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:36.596 [2024-10-15 04:47:25.825342] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:39.887 [2024-10-15 04:47:29.321302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.887 [2024-10-15 04:47:29.321570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:39.887 [2024-10-15 04:47:29.321673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3501.648 ms 00:23:39.887 [2024-10-15 04:47:29.321716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.887 [2024-10-15 04:47:29.359768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.887 [2024-10-15 04:47:29.360023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:39.887 [2024-10-15 04:47:29.360116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.751 ms 00:23:39.887 [2024-10-15 04:47:29.360158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.887 [2024-10-15 04:47:29.360326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.887 [2024-10-15 04:47:29.360491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:39.887 [2024-10-15 04:47:29.360560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:39.887 [2024-10-15 04:47:29.360596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.404715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.404920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.146 [2024-10-15 04:47:29.405010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.106 ms 00:23:40.146 [2024-10-15 04:47:29.405053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.405111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.405146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.146 [2024-10-15 04:47:29.405176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.146 [2024-10-15 04:47:29.405270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.405798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.405933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.146 [2024-10-15 04:47:29.406015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:23:40.146 [2024-10-15 04:47:29.406054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.406183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.406219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.146 [2024-10-15 04:47:29.406296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:40.146 [2024-10-15 04:47:29.406337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.426048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.426193] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.146 [2024-10-15 04:47:29.426306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.695 ms 00:23:40.146 [2024-10-15 04:47:29.426350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.451330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.146 [2024-10-15 04:47:29.454743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.454889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.146 [2024-10-15 04:47:29.454995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.328 ms 00:23:40.146 [2024-10-15 04:47:29.455031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.544644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.544911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:40.146 [2024-10-15 04:47:29.545008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.688 ms 00:23:40.146 [2024-10-15 04:47:29.545046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.545250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.545512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.146 [2024-10-15 04:47:29.545599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:23:40.146 [2024-10-15 04:47:29.545632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.582095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.582260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:40.146 [2024-10-15 04:47:29.582346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.440 ms 00:23:40.146 [2024-10-15 04:47:29.582363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.618984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.619197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:40.146 [2024-10-15 04:47:29.619227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.549 ms 00:23:40.146 [2024-10-15 04:47:29.619238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.146 [2024-10-15 04:47:29.620020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.146 [2024-10-15 04:47:29.620045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.146 [2024-10-15 04:47:29.620059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:23:40.146 [2024-10-15 04:47:29.620071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.723765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.405 [2024-10-15 04:47:29.723841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:40.405 [2024-10-15 04:47:29.723882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.791 ms 00:23:40.405 [2024-10-15 04:47:29.723892] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.763385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.405 [2024-10-15 04:47:29.763446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:40.405 [2024-10-15 04:47:29.763470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.441 ms 00:23:40.405 [2024-10-15 04:47:29.763481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.801598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.405 [2024-10-15 04:47:29.801651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:40.405 [2024-10-15 04:47:29.801669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.116 ms 00:23:40.405 [2024-10-15 04:47:29.801679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.838439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.405 [2024-10-15 04:47:29.838482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.405 [2024-10-15 04:47:29.838499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.757 ms 00:23:40.405 [2024-10-15 04:47:29.838526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.838575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.405 [2024-10-15 04:47:29.838587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.405 [2024-10-15 04:47:29.838604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.405 [2024-10-15 04:47:29.838613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.838717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.405 [2024-10-15 04:47:29.838729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.405 [2024-10-15 04:47:29.838742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:40.405 [2024-10-15 04:47:29.838753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.405 [2024-10-15 04:47:29.839786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4038.131 ms, result 0 00:23:40.405 { 00:23:40.405 "name": "ftl0", 00:23:40.405 "uuid": "f313f300-eab4-4dd8-837e-ccea041cb153" 00:23:40.405 } 00:23:40.405 04:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:40.405 04:47:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:40.664 04:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:40.664 04:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:40.664 04:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:40.929 /dev/nbd0 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:40.929 1+0 records in 00:23:40.929 1+0 records out 00:23:40.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679182 s, 6.0 MB/s 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:40.929 04:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:41.188 [2024-10-15 04:47:30.446534] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:23:41.189 [2024-10-15 04:47:30.446674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78661 ] 00:23:41.189 [2024-10-15 04:47:30.619423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.447 [2024-10-15 04:47:30.735768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.823  [2024-10-15T04:47:33.263Z] Copying: 201/1024 [MB] (201 MBps) [2024-10-15T04:47:34.200Z] Copying: 404/1024 [MB] (202 MBps) [2024-10-15T04:47:35.134Z] Copying: 605/1024 [MB] (201 MBps) [2024-10-15T04:47:36.068Z] Copying: 805/1024 [MB] (199 MBps) [2024-10-15T04:47:36.326Z] Copying: 1000/1024 [MB] (194 MBps) [2024-10-15T04:47:37.702Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:23:48.198 00:23:48.198 04:47:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:49.576 04:47:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:49.835 [2024-10-15 04:47:39.130876] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
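Every FTL management step in the startup sequence above is emitted by mngt/ftl_mngt.c as a fixed quadruple of *NOTICE* entries (427: Action, 428: name, 430: duration, 431: status), so per-step timing can be recovered mechanically from the console text. A minimal Python sketch, assuming this output has been saved to a local file ("console.log" is a hypothetical name; pass another path as the first argument) — the regex relies only on the trace_step format visible in this log:

import re
import sys

# Summarize [FTL] trace_step quadruples (name / duration / status) from a
# saved copy of this console log. Heuristic: a step name runs up to the next
# relative timestamp (HH:MM:SS.mmm) that prefixes the following entry.
STEP = re.compile(
    r"name: (?P<name>.+?) \d{2}:\d{2}:\d{2}\.\d{3}"  # 428:trace_step entry
    r".*?duration: (?P<ms>[0-9.]+) ms"               # 430:trace_step entry
    r".*?status: (?P<st>\d+)",                       # 431:trace_step entry
    re.S,
)

def main(path: str = "console.log") -> None:
    text = open(path, encoding="utf-8", errors="replace").read()
    total = 0.0
    for m in STEP.finditer(text):
        total += float(m["ms"])
        print(f"{m['name']:<40} {float(m['ms']):>9.3f} ms  status={m['st']}")
    print(f"{'sum of traced steps':<40} {total:>9.3f} ms")

if __name__ == "__main__":
    main(*sys.argv[1:])

Run against the startup sequence above, this would single out "Scrub NV cache" (3501.648 ms) as the bulk of the 4038.131 ms "FTL startup" total reported by finish_msg.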
00:23:49.835 [2024-10-15 04:47:39.131005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78756 ] 00:23:49.835 [2024-10-15 04:47:39.300226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.094 [2024-10-15 04:47:39.415350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.550  [2024-10-15T04:47:42.023Z] Copying: 17/1024 [MB] (17 MBps) [2024-10-15T04:47:42.960Z] Copying: 35/1024 [MB] (17 MBps) [2024-10-15T04:47:43.897Z] Copying: 52/1024 [MB] (17 MBps) [2024-10-15T04:47:44.833Z] Copying: 70/1024 [MB] (17 MBps) [2024-10-15T04:47:45.769Z] Copying: 87/1024 [MB] (17 MBps) [2024-10-15T04:47:47.148Z] Copying: 104/1024 [MB] (17 MBps) [2024-10-15T04:47:48.091Z] Copying: 121/1024 [MB] (17 MBps) [2024-10-15T04:47:49.028Z] Copying: 138/1024 [MB] (17 MBps) [2024-10-15T04:47:49.965Z] Copying: 156/1024 [MB] (17 MBps) [2024-10-15T04:47:50.902Z] Copying: 173/1024 [MB] (17 MBps) [2024-10-15T04:47:51.839Z] Copying: 190/1024 [MB] (17 MBps) [2024-10-15T04:47:52.777Z] Copying: 207/1024 [MB] (16 MBps) [2024-10-15T04:47:53.772Z] Copying: 224/1024 [MB] (17 MBps) [2024-10-15T04:47:55.151Z] Copying: 241/1024 [MB] (17 MBps) [2024-10-15T04:47:55.719Z] Copying: 258/1024 [MB] (16 MBps) [2024-10-15T04:47:57.098Z] Copying: 275/1024 [MB] (17 MBps) [2024-10-15T04:47:58.034Z] Copying: 292/1024 [MB] (16 MBps) [2024-10-15T04:47:58.971Z] Copying: 309/1024 [MB] (17 MBps) [2024-10-15T04:47:59.909Z] Copying: 326/1024 [MB] (17 MBps) [2024-10-15T04:48:00.846Z] Copying: 343/1024 [MB] (17 MBps) [2024-10-15T04:48:01.783Z] Copying: 361/1024 [MB] (17 MBps) [2024-10-15T04:48:02.721Z] Copying: 378/1024 [MB] (16 MBps) [2024-10-15T04:48:04.099Z] Copying: 395/1024 [MB] (17 MBps) [2024-10-15T04:48:05.037Z] Copying: 413/1024 [MB] (17 MBps) [2024-10-15T04:48:05.974Z] Copying: 430/1024 [MB] (17 MBps) [2024-10-15T04:48:06.911Z] Copying: 447/1024 [MB] (16 MBps) [2024-10-15T04:48:07.850Z] Copying: 463/1024 [MB] (16 MBps) [2024-10-15T04:48:08.801Z] Copying: 480/1024 [MB] (16 MBps) [2024-10-15T04:48:09.736Z] Copying: 497/1024 [MB] (16 MBps) [2024-10-15T04:48:11.111Z] Copying: 514/1024 [MB] (17 MBps) [2024-10-15T04:48:12.049Z] Copying: 531/1024 [MB] (17 MBps) [2024-10-15T04:48:12.986Z] Copying: 549/1024 [MB] (17 MBps) [2024-10-15T04:48:13.923Z] Copying: 566/1024 [MB] (17 MBps) [2024-10-15T04:48:14.860Z] Copying: 584/1024 [MB] (17 MBps) [2024-10-15T04:48:15.797Z] Copying: 601/1024 [MB] (17 MBps) [2024-10-15T04:48:16.733Z] Copying: 618/1024 [MB] (17 MBps) [2024-10-15T04:48:18.110Z] Copying: 635/1024 [MB] (16 MBps) [2024-10-15T04:48:19.046Z] Copying: 652/1024 [MB] (16 MBps) [2024-10-15T04:48:19.983Z] Copying: 669/1024 [MB] (16 MBps) [2024-10-15T04:48:20.920Z] Copying: 685/1024 [MB] (16 MBps) [2024-10-15T04:48:21.856Z] Copying: 702/1024 [MB] (16 MBps) [2024-10-15T04:48:22.862Z] Copying: 719/1024 [MB] (16 MBps) [2024-10-15T04:48:23.799Z] Copying: 736/1024 [MB] (17 MBps) [2024-10-15T04:48:24.736Z] Copying: 753/1024 [MB] (17 MBps) [2024-10-15T04:48:25.672Z] Copying: 770/1024 [MB] (17 MBps) [2024-10-15T04:48:27.049Z] Copying: 787/1024 [MB] (17 MBps) [2024-10-15T04:48:27.985Z] Copying: 804/1024 [MB] (16 MBps) [2024-10-15T04:48:28.921Z] Copying: 820/1024 [MB] (16 MBps) [2024-10-15T04:48:29.859Z] Copying: 837/1024 [MB] (16 MBps) [2024-10-15T04:48:30.795Z] Copying: 853/1024 [MB] (16 MBps) 
[2024-10-15T04:48:31.731Z] Copying: 870/1024 [MB] (16 MBps) [2024-10-15T04:48:32.668Z] Copying: 887/1024 [MB] (16 MBps) [2024-10-15T04:48:34.046Z] Copying: 903/1024 [MB] (16 MBps) [2024-10-15T04:48:34.983Z] Copying: 920/1024 [MB] (16 MBps) [2024-10-15T04:48:35.919Z] Copying: 937/1024 [MB] (17 MBps) [2024-10-15T04:48:36.856Z] Copying: 954/1024 [MB] (16 MBps) [2024-10-15T04:48:37.791Z] Copying: 971/1024 [MB] (16 MBps) [2024-10-15T04:48:38.749Z] Copying: 988/1024 [MB] (17 MBps) [2024-10-15T04:48:39.686Z] Copying: 1005/1024 [MB] (16 MBps) [2024-10-15T04:48:39.945Z] Copying: 1022/1024 [MB] (17 MBps) [2024-10-15T04:48:40.881Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:24:51.377 00:24:51.377 04:48:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:51.377 04:48:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:51.636 04:48:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:51.896 [2024-10-15 04:48:41.273366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.273436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:51.896 [2024-10-15 04:48:41.273469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:51.896 [2024-10-15 04:48:41.273483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.273530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:51.896 [2024-10-15 04:48:41.277712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.277902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:51.896 [2024-10-15 04:48:41.277934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.159 ms 00:24:51.896 [2024-10-15 04:48:41.277945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.280007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.280050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:51.896 [2024-10-15 04:48:41.280067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:24:51.896 [2024-10-15 04:48:41.280077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.298042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.298095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:51.896 [2024-10-15 04:48:41.298117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:24:51.896 [2024-10-15 04:48:41.298128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.303224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.303265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:51.896 [2024-10-15 04:48:41.303281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:24:51.896 [2024-10-15 04:48:41.303290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.341701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.341783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:51.896 [2024-10-15 04:48:41.341805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.349 ms 00:24:51.896 [2024-10-15 04:48:41.341830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.365155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.365224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:51.896 [2024-10-15 04:48:41.365245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.271 ms 00:24:51.896 [2024-10-15 04:48:41.365255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.896 [2024-10-15 04:48:41.365478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.896 [2024-10-15 04:48:41.365494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:51.896 [2024-10-15 04:48:41.365508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:24:51.896 [2024-10-15 04:48:41.365528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.156 [2024-10-15 04:48:41.404349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.156 [2024-10-15 04:48:41.404411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:52.156 [2024-10-15 04:48:41.404429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.859 ms 00:24:52.156 [2024-10-15 04:48:41.404440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.156 [2024-10-15 04:48:41.440703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.156 [2024-10-15 04:48:41.440742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:52.156 [2024-10-15 04:48:41.440758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.252 ms 00:24:52.156 [2024-10-15 04:48:41.440784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.156 [2024-10-15 04:48:41.476238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.156 [2024-10-15 04:48:41.476274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:52.156 [2024-10-15 04:48:41.476291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.452 ms 00:24:52.156 [2024-10-15 04:48:41.476301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.156 [2024-10-15 04:48:41.511535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.156 [2024-10-15 04:48:41.511571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:52.157 [2024-10-15 04:48:41.511586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.185 ms 00:24:52.157 [2024-10-15 04:48:41.511597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.157 [2024-10-15 04:48:41.511640] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:52.157 [2024-10-15 04:48:41.511656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511682] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.511996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512009] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 
[2024-10-15 04:48:41.512321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:24:52.157 [2024-10-15 04:48:41.512622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:52.157 [2024-10-15 04:48:41.512775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:52.158 [2024-10-15 04:48:41.512915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:52.158 [2024-10-15 04:48:41.512928] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f313f300-eab4-4dd8-837e-ccea041cb153 
00:24:52.158 [2024-10-15 04:48:41.512939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:52.158 [2024-10-15 04:48:41.512957] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:52.158 [2024-10-15 04:48:41.512967] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:52.158 [2024-10-15 04:48:41.512980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:52.158 [2024-10-15 04:48:41.512990] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:52.158 [2024-10-15 04:48:41.513005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:52.158 [2024-10-15 04:48:41.513015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:52.158 [2024-10-15 04:48:41.513027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:52.158 [2024-10-15 04:48:41.513036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:52.158 [2024-10-15 04:48:41.513047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.158 [2024-10-15 04:48:41.513057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:52.158 [2024-10-15 04:48:41.513070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:24:52.158 [2024-10-15 04:48:41.513080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.158 [2024-10-15 04:48:41.533055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.158 [2024-10-15 04:48:41.533199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:52.158 [2024-10-15 04:48:41.533224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.949 ms 00:24:52.158 [2024-10-15 04:48:41.533238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.158 [2024-10-15 04:48:41.533761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.158 [2024-10-15 04:48:41.533776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:52.158 [2024-10-15 04:48:41.533789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:24:52.158 [2024-10-15 04:48:41.533799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.158 [2024-10-15 04:48:41.598005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.158 [2024-10-15 04:48:41.598046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.158 [2024-10-15 04:48:41.598065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.158 [2024-10-15 04:48:41.598091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.158 [2024-10-15 04:48:41.598151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.158 [2024-10-15 04:48:41.598170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.158 [2024-10-15 04:48:41.598184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.158 [2024-10-15 04:48:41.598194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.158 [2024-10-15 04:48:41.598300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.158 [2024-10-15 04:48:41.598318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.158 [2024-10-15 04:48:41.598331] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.158 [2024-10-15 04:48:41.598344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.158 [2024-10-15 04:48:41.598369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.158 [2024-10-15 04:48:41.598380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.158 [2024-10-15 04:48:41.598392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.158 [2024-10-15 04:48:41.598402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.417 [2024-10-15 04:48:41.719473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.417 [2024-10-15 04:48:41.719535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.418 [2024-10-15 04:48:41.719552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.719582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.819458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.819504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.418 [2024-10-15 04:48:41.819522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.819533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.819641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.819653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.418 [2024-10-15 04:48:41.819666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.819677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.819741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.819753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.418 [2024-10-15 04:48:41.819766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.819776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.819905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.819920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.418 [2024-10-15 04:48:41.819933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.819943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.819986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.820001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:52.418 [2024-10-15 04:48:41.820014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.820023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.820064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.820075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:24:52.418 [2024-10-15 04:48:41.820088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.820098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.820149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.418 [2024-10-15 04:48:41.820161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.418 [2024-10-15 04:48:41.820173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.418 [2024-10-15 04:48:41.820183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.418 [2024-10-15 04:48:41.820351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.862 ms, result 0 00:24:52.418 true 00:24:52.418 04:48:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78519 00:24:52.418 04:48:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78519 00:24:52.418 04:48:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:52.677 [2024-10-15 04:48:41.941950] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:24:52.677 [2024-10-15 04:48:41.942070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79396 ] 00:24:52.677 [2024-10-15 04:48:42.112987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.935 [2024-10-15 04:48:42.224402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.313  [2024-10-15T04:48:44.754Z] Copying: 203/1024 [MB] (203 MBps) [2024-10-15T04:48:45.691Z] Copying: 409/1024 [MB] (206 MBps) [2024-10-15T04:48:46.627Z] Copying: 616/1024 [MB] (206 MBps) [2024-10-15T04:48:47.565Z] Copying: 819/1024 [MB] (203 MBps) [2024-10-15T04:48:47.565Z] Copying: 1020/1024 [MB] (201 MBps) [2024-10-15T04:48:48.943Z] Copying: 1024/1024 [MB] (average 204 MBps) 00:24:59.439 00:24:59.439 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78519 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:59.439 04:48:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:59.439 [2024-10-15 04:48:48.767218] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
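The spdk_dd progress entries make the asymmetry between the data paths plain: both /dev/urandom-to-file passes report roughly 200 MBps averages (200 and 204), while the oflag=direct copy through /dev/nbd0 into ftl0 averaged 17 MBps. A small sketch under the same assumption as above (a saved "console.log"), grouping the "Copying: N/TOTAL [MB] (X MBps)" entries into runs and summarizing each:

import re
import sys

# Group spdk_dd progress entries into runs (the copied-MB counter resets
# when a new copy starts) and summarize the reported MBps figures per run.
PROG = re.compile(r"Copying: (\d+)/(\d+) \[MB\] \((?:average )?(\d+) MBps\)")

def main(path: str = "console.log") -> None:
    text = open(path, encoding="utf-8", errors="replace").read()
    runs, cur, last = [], [], -1
    for m in PROG.finditer(text):
        done, total, rate = map(int, m.groups())
        if done < last:          # progress reset => a new spdk_dd invocation
            runs.append(cur)
            cur, last = [], -1
        cur.append((done, total, rate))
        last = done
    if cur:
        runs.append(cur)
    for i, run in enumerate(runs, 1):
        rates = [rate for _, _, rate in run]
        done, total = run[-1][0], run[-1][1]
        print(f"run {i}: {done}/{total} MB, {min(rates)}-{max(rates)} MBps, "
              f"final report {rates[-1]} MBps")

if __name__ == "__main__":
    main(*sys.argv[1:])

The roughly 12x gap is consistent with the command lines shown above: the slow pass issues 4096-byte direct-I/O writes through the nbd block device into the FTL bdev, while the urandom passes write plain files.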
00:24:59.439 [2024-10-15 04:48:48.767341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79465 ] 00:24:59.439 [2024-10-15 04:48:48.940198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.698 [2024-10-15 04:48:49.056928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.958 [2024-10-15 04:48:49.422209] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:59.958 [2024-10-15 04:48:49.422291] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:00.216 [2024-10-15 04:48:49.488494] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:00.216 [2024-10-15 04:48:49.488883] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:00.216 [2024-10-15 04:48:49.489158] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:00.476 [2024-10-15 04:48:49.804763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.804837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:00.476 [2024-10-15 04:48:49.804854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:00.476 [2024-10-15 04:48:49.804866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.804922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.804934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.476 [2024-10-15 04:48:49.804945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:00.476 [2024-10-15 04:48:49.804955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.804977] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:00.476 [2024-10-15 04:48:49.805918] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:00.476 [2024-10-15 04:48:49.805950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.805962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.476 [2024-10-15 04:48:49.805973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:25:00.476 [2024-10-15 04:48:49.805982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.807394] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:00.476 [2024-10-15 04:48:49.826466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.826506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:00.476 [2024-10-15 04:48:49.826526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.103 ms 00:25:00.476 [2024-10-15 04:48:49.826537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.826606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.826623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:00.476 [2024-10-15 04:48:49.826634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:00.476 [2024-10-15 04:48:49.826644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.833560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.833761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.476 [2024-10-15 04:48:49.833786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.835 ms 00:25:00.476 [2024-10-15 04:48:49.833797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.833901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.833916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.476 [2024-10-15 04:48:49.833927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:00.476 [2024-10-15 04:48:49.833937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.833986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.833999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:00.476 [2024-10-15 04:48:49.834014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:00.476 [2024-10-15 04:48:49.834025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.834052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:00.476 [2024-10-15 04:48:49.838886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.838922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.476 [2024-10-15 04:48:49.838935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:25:00.476 [2024-10-15 04:48:49.838945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.838977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.838999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:00.476 [2024-10-15 04:48:49.839010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:00.476 [2024-10-15 04:48:49.839021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.839079] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:00.476 [2024-10-15 04:48:49.839103] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:00.476 [2024-10-15 04:48:49.839141] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:00.476 [2024-10-15 04:48:49.839159] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:00.476 [2024-10-15 04:48:49.839249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:00.476 [2024-10-15 04:48:49.839262] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:00.476 
[2024-10-15 04:48:49.839276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:00.476 [2024-10-15 04:48:49.839289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:00.476 [2024-10-15 04:48:49.839300] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:00.476 [2024-10-15 04:48:49.839316] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:00.476 [2024-10-15 04:48:49.839326] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:00.476 [2024-10-15 04:48:49.839336] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:00.476 [2024-10-15 04:48:49.839346] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:00.476 [2024-10-15 04:48:49.839356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.839367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:00.476 [2024-10-15 04:48:49.839377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:25:00.476 [2024-10-15 04:48:49.839387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.839461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.476 [2024-10-15 04:48:49.839472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:00.476 [2024-10-15 04:48:49.839485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:00.476 [2024-10-15 04:48:49.839495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.476 [2024-10-15 04:48:49.839590] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:00.476 [2024-10-15 04:48:49.839610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:00.476 [2024-10-15 04:48:49.839621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:00.477 [2024-10-15 04:48:49.839652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:00.477 [2024-10-15 04:48:49.839681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.477 [2024-10-15 04:48:49.839700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:00.477 [2024-10-15 04:48:49.839719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:00.477 [2024-10-15 04:48:49.839728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.477 [2024-10-15 04:48:49.839738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:00.477 [2024-10-15 04:48:49.839748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:00.477 [2024-10-15 04:48:49.839757] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:00.477 [2024-10-15 04:48:49.839776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:00.477 [2024-10-15 04:48:49.839804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:00.477 [2024-10-15 04:48:49.839854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:00.477 [2024-10-15 04:48:49.839882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:00.477 [2024-10-15 04:48:49.839911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.477 [2024-10-15 04:48:49.839947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:00.477 [2024-10-15 04:48:49.839957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:00.477 [2024-10-15 04:48:49.839966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.477 [2024-10-15 04:48:49.839975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:00.477 [2024-10-15 04:48:49.839985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:00.477 [2024-10-15 04:48:49.839994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.477 [2024-10-15 04:48:49.840005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:00.477 [2024-10-15 04:48:49.840014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:00.477 [2024-10-15 04:48:49.840024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.477 [2024-10-15 04:48:49.840033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:00.477 [2024-10-15 04:48:49.840042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:00.477 [2024-10-15 04:48:49.840052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.477 [2024-10-15 04:48:49.840061] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:00.477 [2024-10-15 04:48:49.840071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:00.477 [2024-10-15 04:48:49.840081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.477 [2024-10-15 04:48:49.840090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.477 [2024-10-15 
04:48:49.840107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:00.477 [2024-10-15 04:48:49.840116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:00.477 [2024-10-15 04:48:49.840126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:00.477 [2024-10-15 04:48:49.840135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:00.477 [2024-10-15 04:48:49.840144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:00.477 [2024-10-15 04:48:49.840153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:00.477 [2024-10-15 04:48:49.840165] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:00.477 [2024-10-15 04:48:49.840178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:00.477 [2024-10-15 04:48:49.840201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:00.477 [2024-10-15 04:48:49.840212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:00.477 [2024-10-15 04:48:49.840223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:00.477 [2024-10-15 04:48:49.840233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:00.477 [2024-10-15 04:48:49.840243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:00.477 [2024-10-15 04:48:49.840253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:00.477 [2024-10-15 04:48:49.840264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:00.477 [2024-10-15 04:48:49.840274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:00.477 [2024-10-15 04:48:49.840284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:00.477 [2024-10-15 04:48:49.840335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:00.477 [2024-10-15 04:48:49.840346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.477 [2024-10-15 04:48:49.840367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:00.477 [2024-10-15 04:48:49.840378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:00.477 [2024-10-15 04:48:49.840388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:00.477 [2024-10-15 04:48:49.840398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.840408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:00.477 [2024-10-15 04:48:49.840419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:25:00.477 [2024-10-15 04:48:49.840428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.880290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.880333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.477 [2024-10-15 04:48:49.880349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.878 ms 00:25:00.477 [2024-10-15 04:48:49.880360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.880442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.880454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:00.477 [2024-10-15 04:48:49.880468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:00.477 [2024-10-15 04:48:49.880478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.942701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.942761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.477 [2024-10-15 04:48:49.942777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.253 ms 00:25:00.477 [2024-10-15 04:48:49.942789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.942870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.942882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.477 [2024-10-15 04:48:49.942894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.477 [2024-10-15 04:48:49.942904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.943407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.943430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.477 [2024-10-15 04:48:49.943442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:25:00.477 [2024-10-15 04:48:49.943452] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.943579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.477 [2024-10-15 04:48:49.943599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.477 [2024-10-15 04:48:49.943609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:00.477 [2024-10-15 04:48:49.943620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.477 [2024-10-15 04:48:49.963016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.478 [2024-10-15 04:48:49.963062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.478 [2024-10-15 04:48:49.963077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.404 ms 00:25:00.478 [2024-10-15 04:48:49.963088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:49.982544] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:00.737 [2024-10-15 04:48:49.982718] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:00.737 [2024-10-15 04:48:49.982811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:49.982856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:00.737 [2024-10-15 04:48:49.982890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.622 ms 00:25:00.737 [2024-10-15 04:48:49.982919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.013168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.013340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:00.737 [2024-10-15 04:48:50.013438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.215 ms 00:25:00.737 [2024-10-15 04:48:50.013475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.031883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.032060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:00.737 [2024-10-15 04:48:50.032139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.322 ms 00:25:00.737 [2024-10-15 04:48:50.032175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.051196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.051366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:00.737 [2024-10-15 04:48:50.051440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.944 ms 00:25:00.737 [2024-10-15 04:48:50.051475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.052322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.052449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.737 [2024-10-15 04:48:50.052526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:25:00.737 [2024-10-15 04:48:50.052562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
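
The trace_step notices above arrive in fixed quadruples: Action (427), name (428), duration (430), status (431). To tabulate step durations from a saved copy of this console output, something like the following sketch works — assuming one log record per line as the console originally emitted them ("console.log" is a placeholder filename, not a file the test produces):

    # Pair each management step name (428:trace_step) with the duration
    # reported by the matching 430:trace_step record that follows it.
    awk -F': ' '/428:trace_step/ { step = $NF }
                /430:trace_step/ { printf "%-32s %s\n", step, $NF }' console.log

For the startup above this would print, e.g., "Load super block    19.103 ms".
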
00:25:00.737 [2024-10-15 04:48:50.139242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.139472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:00.737 [2024-10-15 04:48:50.139552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.760 ms 00:25:00.737 [2024-10-15 04:48:50.139589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.151495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.737 [2024-10-15 04:48:50.154849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.154989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.737 [2024-10-15 04:48:50.155111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.182 ms 00:25:00.737 [2024-10-15 04:48:50.155151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.155284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.155372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:00.737 [2024-10-15 04:48:50.155410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:00.737 [2024-10-15 04:48:50.155440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.155626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.155731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.737 [2024-10-15 04:48:50.155832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:00.737 [2024-10-15 04:48:50.155871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.155931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.156051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:00.737 [2024-10-15 04:48:50.156101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.737 [2024-10-15 04:48:50.156131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.156189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:00.737 [2024-10-15 04:48:50.156223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.156254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:00.737 [2024-10-15 04:48:50.156284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:00.737 [2024-10-15 04:48:50.156383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.192974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 04:48:50.193152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.737 [2024-10-15 04:48:50.193267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.571 ms 00:25:00.737 [2024-10-15 04:48:50.193305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.193419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.737 [2024-10-15 
04:48:50.193535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.737 [2024-10-15 04:48:50.193612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:00.737 [2024-10-15 04:48:50.193625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.737 [2024-10-15 04:48:50.194928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.262 ms, result 0 00:25:02.116  [2024-10-15T04:48:52.555Z] Copying: 24/1024 [MB] (24 MBps) ... [2024-10-15T04:49:33.990Z] Copying: 1024/1024 [MB] (average 23 MBps) [2024-10-15 04:49:33.741608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.741697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:44.486 [2024-10-15 04:49:33.741718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:44.486 [2024-10-15 04:49:33.741730] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.745422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.486 [2024-10-15 04:49:33.751213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.751254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:44.486 [2024-10-15 04:49:33.751269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.745 ms 00:25:44.486 [2024-10-15 04:49:33.751280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.760436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.760507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:44.486 [2024-10-15 04:49:33.760524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.259 ms 00:25:44.486 [2024-10-15 04:49:33.760535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.784671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.784726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:44.486 [2024-10-15 04:49:33.784742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.152 ms 00:25:44.486 [2024-10-15 04:49:33.784755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.789753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.789795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:44.486 [2024-10-15 04:49:33.789830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:25:44.486 [2024-10-15 04:49:33.789841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.834024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.834109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:44.486 [2024-10-15 04:49:33.834129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.164 ms 00:25:44.486 [2024-10-15 04:49:33.834140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.860307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.860424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:44.486 [2024-10-15 04:49:33.860444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.123 ms 00:25:44.486 [2024-10-15 04:49:33.860457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.486 [2024-10-15 04:49:33.983854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.486 [2024-10-15 04:49:33.984198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:44.486 [2024-10-15 04:49:33.984232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 123.484 ms 00:25:44.486 [2024-10-15 04:49:33.984264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.747 [2024-10-15 04:49:34.024941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.747 [2024-10-15 04:49:34.025002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist band info metadata 00:25:44.747 [2024-10-15 04:49:34.025022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.706 ms 00:25:44.747 [2024-10-15 04:49:34.025034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.747 [2024-10-15 04:49:34.062222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.747 [2024-10-15 04:49:34.062271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:44.747 [2024-10-15 04:49:34.062287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.199 ms 00:25:44.747 [2024-10-15 04:49:34.062298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.747 [2024-10-15 04:49:34.102374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.747 [2024-10-15 04:49:34.102678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.747 [2024-10-15 04:49:34.102712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.090 ms 00:25:44.747 [2024-10-15 04:49:34.102724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.747 [2024-10-15 04:49:34.147300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.747 [2024-10-15 04:49:34.147394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.747 [2024-10-15 04:49:34.147415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.469 ms 00:25:44.747 [2024-10-15 04:49:34.147428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.747 [2024-10-15 04:49:34.147517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.747 [2024-10-15 04:49:34.147541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108032 / 261120 wr_cnt: 1 state: open 00:25:44.747 [2024-10-15 04:49:34.147556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.147848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.148073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.148084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.148096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:44.747 [2024-10-15 04:49:34.148107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148490] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 
04:49:34.148776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:44.748 [2024-10-15 04:49:34.148952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:44.748 [2024-10-15 04:49:34.148963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f313f300-eab4-4dd8-837e-ccea041cb153 00:25:44.748 [2024-10-15 04:49:34.148975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108032 00:25:44.748 [2024-10-15 04:49:34.148985] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108992 00:25:44.748 [2024-10-15 04:49:34.149027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108032 00:25:44.748 [2024-10-15 04:49:34.149039] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:25:44.748 [2024-10-15 04:49:34.149050] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:44.748 [2024-10-15 04:49:34.149062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:44.748 [2024-10-15 04:49:34.149073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:44.748 [2024-10-15 04:49:34.149082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:44.748 [2024-10-15 04:49:34.149093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:44.748 [2024-10-15 04:49:34.149104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.748 [2024-10-15 04:49:34.149118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:44.748 [2024-10-15 04:49:34.149130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:25:44.748 [2024-10-15 04:49:34.149142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:44.748 [2024-10-15 04:49:34.171749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.748 [2024-10-15 04:49:34.171849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:44.748 [2024-10-15 04:49:34.171868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.568 ms 00:25:44.748 [2024-10-15 04:49:34.171880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.748 [2024-10-15 04:49:34.172485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.748 [2024-10-15 04:49:34.172509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:44.748 [2024-10-15 04:49:34.172522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:25:44.748 [2024-10-15 04:49:34.172533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.748 [2024-10-15 04:49:34.228899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.748 [2024-10-15 04:49:34.228982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.748 [2024-10-15 04:49:34.229001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.748 [2024-10-15 04:49:34.229014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.748 [2024-10-15 04:49:34.229114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.748 [2024-10-15 04:49:34.229127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.748 [2024-10-15 04:49:34.229139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.748 [2024-10-15 04:49:34.229150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.748 [2024-10-15 04:49:34.229265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.748 [2024-10-15 04:49:34.229281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.749 [2024-10-15 04:49:34.229293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.749 [2024-10-15 04:49:34.229305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.749 [2024-10-15 04:49:34.229326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.749 [2024-10-15 04:49:34.229348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.749 [2024-10-15 04:49:34.229360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.749 [2024-10-15 04:49:34.229371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.364761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.365091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.008 [2024-10-15 04:49:34.365121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.365134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.475332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.475408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.008 [2024-10-15 04:49:34.475428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 
04:49:34.475439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.475572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.475591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.008 [2024-10-15 04:49:34.475603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.475614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.475665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.475678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.008 [2024-10-15 04:49:34.475689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.475700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.475859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.475876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.008 [2024-10-15 04:49:34.475893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.475905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.475969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.475983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.008 [2024-10-15 04:49:34.475995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.476006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.476053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.476066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.008 [2024-10-15 04:49:34.476082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.476094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.476148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.008 [2024-10-15 04:49:34.476162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.008 [2024-10-15 04:49:34.476173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.008 [2024-10-15 04:49:34.476184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.008 [2024-10-15 04:49:34.476334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 737.151 ms, result 0 00:25:46.912 00:25:46.912 00:25:46.912 04:49:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:48.290 04:49:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:48.556 [2024-10-15 04:49:37.806383] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
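
The numbers in the shutdown statistics and the spdk_dd read-back above are mutually consistent, which is worth checking when triaging a run like this. A minimal sketch of the arithmetic, assuming the FTL bdev's 4 KiB block size (these commands are illustrative only, not part of dirty_shutdown.sh):

    # WAF = total writes / user writes, from the "Dump statistics" output above
    # (user writes 108032 also matches Band 1's fill and "total valid LBAs").
    echo 'scale=6; 108992 / 108032' | bc       # 1.008886, logged rounded to 1.0089
    # spdk_dd --count=262144 at 4096 B per block is the 1024 MiB the copy reported
    echo $(( 262144 * 4096 / 1024 / 1024 ))    # 1024 (MiB)
    # 20971520 L2P entries of 4 B each is the 80.00 MiB "Region l2p" in the layout
    echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80 (MiB)
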
00:25:48.556 [2024-10-15 04:49:37.806509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79962 ] 00:25:48.556 [2024-10-15 04:49:37.979799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.817 [2024-10-15 04:49:38.120021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.075 [2024-10-15 04:49:38.555946] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.075 [2024-10-15 04:49:38.556032] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.336 [2024-10-15 04:49:38.722506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.336 [2024-10-15 04:49:38.722574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.336 [2024-10-15 04:49:38.722592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:49.336 [2024-10-15 04:49:38.722611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.336 [2024-10-15 04:49:38.722669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.336 [2024-10-15 04:49:38.722683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.336 [2024-10-15 04:49:38.722693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:49.336 [2024-10-15 04:49:38.722708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.336 [2024-10-15 04:49:38.722732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.336 [2024-10-15 04:49:38.723802] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.336 [2024-10-15 04:49:38.723853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.336 [2024-10-15 04:49:38.723864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.336 [2024-10-15 04:49:38.723877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:25:49.336 [2024-10-15 04:49:38.723889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.336 [2024-10-15 04:49:38.726233] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:49.336 [2024-10-15 04:49:38.747298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.336 [2024-10-15 04:49:38.747340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:49.336 [2024-10-15 04:49:38.747358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.099 ms 00:25:49.336 [2024-10-15 04:49:38.747370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.336 [2024-10-15 04:49:38.747441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.336 [2024-10-15 04:49:38.747457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:49.336 [2024-10-15 04:49:38.747470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:49.336 [2024-10-15 04:49:38.747481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.336 [2024-10-15 04:49:38.759451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:49.337 [2024-10-15 04:49:38.759481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.337 [2024-10-15 04:49:38.759496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.912 ms 00:25:49.337 [2024-10-15 04:49:38.759508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.759604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.337 [2024-10-15 04:49:38.759619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.337 [2024-10-15 04:49:38.759631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:49.337 [2024-10-15 04:49:38.759642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.759705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.337 [2024-10-15 04:49:38.759718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.337 [2024-10-15 04:49:38.759730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:49.337 [2024-10-15 04:49:38.759741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.759771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:49.337 [2024-10-15 04:49:38.765583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.337 [2024-10-15 04:49:38.765800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.337 [2024-10-15 04:49:38.765834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms 00:25:49.337 [2024-10-15 04:49:38.765847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.765893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.337 [2024-10-15 04:49:38.765905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:49.337 [2024-10-15 04:49:38.765917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:49.337 [2024-10-15 04:49:38.765928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.765970] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:49.337 [2024-10-15 04:49:38.765998] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:49.337 [2024-10-15 04:49:38.766047] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:49.337 [2024-10-15 04:49:38.766071] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:49.337 [2024-10-15 04:49:38.766166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:49.337 [2024-10-15 04:49:38.766182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:49.337 [2024-10-15 04:49:38.766196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:49.337 [2024-10-15 04:49:38.766210] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766223] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766236] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:49.337 [2024-10-15 04:49:38.766248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:49.337 [2024-10-15 04:49:38.766258] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:49.337 [2024-10-15 04:49:38.766269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:49.337 [2024-10-15 04:49:38.766281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.337 [2024-10-15 04:49:38.766296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:49.337 [2024-10-15 04:49:38.766308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:25:49.337 [2024-10-15 04:49:38.766319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.766394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.337 [2024-10-15 04:49:38.766406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:49.337 [2024-10-15 04:49:38.766418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:49.337 [2024-10-15 04:49:38.766429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.337 [2024-10-15 04:49:38.766530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:49.337 [2024-10-15 04:49:38.766545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:49.337 [2024-10-15 04:49:38.766561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:49.337 [2024-10-15 04:49:38.766594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:49.337 [2024-10-15 04:49:38.766627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.337 [2024-10-15 04:49:38.766647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:49.337 [2024-10-15 04:49:38.766658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:49.337 [2024-10-15 04:49:38.766668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.337 [2024-10-15 04:49:38.766678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:49.337 [2024-10-15 04:49:38.766687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:49.337 [2024-10-15 04:49:38.766708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:49.337 [2024-10-15 04:49:38.766727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766737] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:49.337 [2024-10-15 04:49:38.766757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:49.337 [2024-10-15 04:49:38.766787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:49.337 [2024-10-15 04:49:38.766832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:49.337 [2024-10-15 04:49:38.766861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.337 [2024-10-15 04:49:38.766881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:49.337 [2024-10-15 04:49:38.766890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.337 [2024-10-15 04:49:38.766910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:49.337 [2024-10-15 04:49:38.766921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:49.337 [2024-10-15 04:49:38.766930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.337 [2024-10-15 04:49:38.766941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:49.337 [2024-10-15 04:49:38.766950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:49.337 [2024-10-15 04:49:38.766959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:49.337 [2024-10-15 04:49:38.766978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:49.337 [2024-10-15 04:49:38.766987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.766999] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:49.337 [2024-10-15 04:49:38.767009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:49.337 [2024-10-15 04:49:38.767020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.337 [2024-10-15 04:49:38.767031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.337 [2024-10-15 04:49:38.767041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:49.337 [2024-10-15 04:49:38.767051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:49.337 [2024-10-15 04:49:38.767061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:49.337 
[2024-10-15 04:49:38.767070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:49.337 [2024-10-15 04:49:38.767081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:49.337 [2024-10-15 04:49:38.767091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:49.337 [2024-10-15 04:49:38.767103] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:49.337 [2024-10-15 04:49:38.767119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.337 [2024-10-15 04:49:38.767132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:49.337 [2024-10-15 04:49:38.767143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:49.337 [2024-10-15 04:49:38.767154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:49.337 [2024-10-15 04:49:38.767165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:49.337 [2024-10-15 04:49:38.767176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:49.337 [2024-10-15 04:49:38.767187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:49.337 [2024-10-15 04:49:38.767198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:49.337 [2024-10-15 04:49:38.767210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:49.337 [2024-10-15 04:49:38.767221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:49.337 [2024-10-15 04:49:38.767232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:49.337 [2024-10-15 04:49:38.767243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:49.338 [2024-10-15 04:49:38.767254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:49.338 [2024-10-15 04:49:38.767265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:49.338 [2024-10-15 04:49:38.767276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:49.338 [2024-10-15 04:49:38.767287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:49.338 [2024-10-15 04:49:38.767298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.338 [2024-10-15 04:49:38.767314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:49.338 [2024-10-15 04:49:38.767324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:49.338 [2024-10-15 04:49:38.767335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:49.338 [2024-10-15 04:49:38.767346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:49.338 [2024-10-15 04:49:38.767359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.338 [2024-10-15 04:49:38.767371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:49.338 [2024-10-15 04:49:38.767382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:25:49.338 [2024-10-15 04:49:38.767393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.338 [2024-10-15 04:49:38.817673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.338 [2024-10-15 04:49:38.817721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.338 [2024-10-15 04:49:38.817737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.306 ms 00:25:49.338 [2024-10-15 04:49:38.817749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.338 [2024-10-15 04:49:38.817868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.338 [2024-10-15 04:49:38.817886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:49.338 [2024-10-15 04:49:38.817899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:49.338 [2024-10-15 04:49:38.817910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.885082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.885291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.598 [2024-10-15 04:49:38.885318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.174 ms 00:25:49.598 [2024-10-15 04:49:38.885331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.885403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.885416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:49.598 [2024-10-15 04:49:38.885428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:49.598 [2024-10-15 04:49:38.885447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.886253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.886276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:49.598 [2024-10-15 04:49:38.886289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:25:49.598 [2024-10-15 04:49:38.886300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.886445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.886467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:49.598 [2024-10-15 04:49:38.886479] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:25:49.598 [2024-10-15 04:49:38.886490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.907718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.907759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:49.598 [2024-10-15 04:49:38.907775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.231 ms 00:25:49.598 [2024-10-15 04:49:38.907791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.927674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:49.598 [2024-10-15 04:49:38.927896] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:49.598 [2024-10-15 04:49:38.927920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.927933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:49.598 [2024-10-15 04:49:38.927946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.981 ms 00:25:49.598 [2024-10-15 04:49:38.927957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.958976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.598 [2024-10-15 04:49:38.959145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:49.598 [2024-10-15 04:49:38.959168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.000 ms 00:25:49.598 [2024-10-15 04:49:38.959180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.598 [2024-10-15 04:49:38.978129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.599 [2024-10-15 04:49:38.978290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:49.599 [2024-10-15 04:49:38.978310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.873 ms 00:25:49.599 [2024-10-15 04:49:38.978321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.599 [2024-10-15 04:49:38.996869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.599 [2024-10-15 04:49:38.997014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:49.599 [2024-10-15 04:49:38.997035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.515 ms 00:25:49.599 [2024-10-15 04:49:38.997046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.599 [2024-10-15 04:49:38.997989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.599 [2024-10-15 04:49:38.998024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:49.599 [2024-10-15 04:49:38.998039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:25:49.599 [2024-10-15 04:49:38.998050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.599 [2024-10-15 04:49:39.097165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.599 [2024-10-15 04:49:39.097270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:49.599 [2024-10-15 04:49:39.097291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 99.244 ms 00:25:49.599 [2024-10-15 04:49:39.097312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.109373] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:49.858 [2024-10-15 04:49:39.114281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.114314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:49.858 [2024-10-15 04:49:39.114331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.911 ms 00:25:49.858 [2024-10-15 04:49:39.114343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.114478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.114494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:49.858 [2024-10-15 04:49:39.114506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:49.858 [2024-10-15 04:49:39.114517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.116653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.116694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:49.858 [2024-10-15 04:49:39.116708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:25:49.858 [2024-10-15 04:49:39.116720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.116784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.116796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:49.858 [2024-10-15 04:49:39.116808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:49.858 [2024-10-15 04:49:39.116836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.116882] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:49.858 [2024-10-15 04:49:39.116896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.116911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:49.858 [2024-10-15 04:49:39.116923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:49.858 [2024-10-15 04:49:39.116934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.155142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.155191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:49.858 [2024-10-15 04:49:39.155208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.245 ms 00:25:49.858 [2024-10-15 04:49:39.155220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.858 [2024-10-15 04:49:39.155323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.858 [2024-10-15 04:49:39.155337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:49.858 [2024-10-15 04:49:39.155350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:49.858 [2024-10-15 04:49:39.155361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
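Each management step in the trace above is logged as an Action / name / duration / status quadruple. For a long run like this FTL startup, the per-step durations can be totalled straight from the saved console text. A minimal awk sketch, assuming the console was saved to console.log with one log entry per line (an illustrative helper, not part of the SPDK test suite):

awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); total[name] += $0 }
     END { for (n in total) printf "%-36s %10.3f ms\n", n, total[n] }' console.log

The summary line that follows ('FTL startup', duration = 434.579 ms) should roughly match the sum of the individual step durations, less any time spent between steps.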
00:25:49.858 [2024-10-15 04:49:39.157004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.579 ms, result 0 00:25:51.237  [2024-10-15T04:49:41.679Z] Copying: 1236/1048576 [kB] (1236 kBps) [2024-10-15T04:49:42.617Z] Copying: 9024/1048576 [kB] (7788 kBps) [2024-10-15T04:49:43.554Z] Copying: 40/1024 [MB] (31 MBps) [2024-10-15T04:49:44.491Z] Copying: 72/1024 [MB] (31 MBps) [2024-10-15T04:49:45.427Z] Copying: 105/1024 [MB] (32 MBps) [2024-10-15T04:49:46.803Z] Copying: 136/1024 [MB] (31 MBps) [2024-10-15T04:49:47.371Z] Copying: 169/1024 [MB] (33 MBps) [2024-10-15T04:49:48.771Z] Copying: 202/1024 [MB] (32 MBps) [2024-10-15T04:49:49.711Z] Copying: 235/1024 [MB] (32 MBps) [2024-10-15T04:49:50.649Z] Copying: 268/1024 [MB] (33 MBps) [2024-10-15T04:49:51.585Z] Copying: 301/1024 [MB] (33 MBps) [2024-10-15T04:49:52.522Z] Copying: 335/1024 [MB] (33 MBps) [2024-10-15T04:49:53.459Z] Copying: 368/1024 [MB] (33 MBps) [2024-10-15T04:49:54.396Z] Copying: 401/1024 [MB] (33 MBps) [2024-10-15T04:49:55.404Z] Copying: 435/1024 [MB] (33 MBps) [2024-10-15T04:49:56.781Z] Copying: 468/1024 [MB] (33 MBps) [2024-10-15T04:49:57.718Z] Copying: 501/1024 [MB] (33 MBps) [2024-10-15T04:49:58.652Z] Copying: 535/1024 [MB] (33 MBps) [2024-10-15T04:49:59.587Z] Copying: 568/1024 [MB] (33 MBps) [2024-10-15T04:50:00.521Z] Copying: 602/1024 [MB] (33 MBps) [2024-10-15T04:50:01.510Z] Copying: 633/1024 [MB] (31 MBps) [2024-10-15T04:50:02.448Z] Copying: 666/1024 [MB] (32 MBps) [2024-10-15T04:50:03.381Z] Copying: 699/1024 [MB] (33 MBps) [2024-10-15T04:50:04.756Z] Copying: 733/1024 [MB] (34 MBps) [2024-10-15T04:50:05.690Z] Copying: 768/1024 [MB] (34 MBps) [2024-10-15T04:50:06.624Z] Copying: 800/1024 [MB] (32 MBps) [2024-10-15T04:50:07.558Z] Copying: 833/1024 [MB] (32 MBps) [2024-10-15T04:50:08.491Z] Copying: 870/1024 [MB] (36 MBps) [2024-10-15T04:50:09.427Z] Copying: 908/1024 [MB] (38 MBps) [2024-10-15T04:50:10.371Z] Copying: 945/1024 [MB] (37 MBps) [2024-10-15T04:50:11.741Z] Copying: 983/1024 [MB] (37 MBps) [2024-10-15T04:50:11.741Z] Copying: 1019/1024 [MB] (35 MBps) [2024-10-15T04:50:13.119Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-10-15 04:50:12.801492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.801625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:23.615 [2024-10-15 04:50:12.801682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:23.615 [2024-10-15 04:50:12.801714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.801776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:23.615 [2024-10-15 04:50:12.812756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.812859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:23.615 [2024-10-15 04:50:12.812904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.899 ms 00:26:23.615 [2024-10-15 04:50:12.812933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.813395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.813437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:23.615 [2024-10-15 04:50:12.813464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.400 ms 00:26:23.615 [2024-10-15 04:50:12.813498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.828591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.828652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:23.615 [2024-10-15 04:50:12.828672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.077 ms 00:26:23.615 [2024-10-15 04:50:12.828704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.833807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.833852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:23.615 [2024-10-15 04:50:12.833866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.074 ms 00:26:23.615 [2024-10-15 04:50:12.833883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.870441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.870499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:23.615 [2024-10-15 04:50:12.870514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.571 ms 00:26:23.615 [2024-10-15 04:50:12.870525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.892360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.892593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:23.615 [2024-10-15 04:50:12.892618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.822 ms 00:26:23.615 [2024-10-15 04:50:12.892630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.894854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.894893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:23.615 [2024-10-15 04:50:12.894907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:26:23.615 [2024-10-15 04:50:12.894918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.615 [2024-10-15 04:50:12.932166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.615 [2024-10-15 04:50:12.932228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:23.615 [2024-10-15 04:50:12.932245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.287 ms 00:26:23.616 [2024-10-15 04:50:12.932256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.616 [2024-10-15 04:50:12.970802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.616 [2024-10-15 04:50:12.971076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:23.616 [2024-10-15 04:50:12.971118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.549 ms 00:26:23.616 [2024-10-15 04:50:12.971129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.616 [2024-10-15 04:50:13.008283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.616 [2024-10-15 04:50:13.008345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:23.616 [2024-10-15 
04:50:13.008361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.160 ms 00:26:23.616 [2024-10-15 04:50:13.008372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.616 [2024-10-15 04:50:13.045875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.616 [2024-10-15 04:50:13.046107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:23.616 [2024-10-15 04:50:13.046133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.454 ms 00:26:23.616 [2024-10-15 04:50:13.046144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.616 [2024-10-15 04:50:13.046224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:23.616 [2024-10-15 04:50:13.046245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:23.616 [2024-10-15 04:50:13.046259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:23.616 [2024-10-15 04:50:13.046270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046456] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 
04:50:13.046729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.046997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:26:23.616 [2024-10-15 04:50:13.047069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:23.616 [2024-10-15 04:50:13.047229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:23.617 [2024-10-15 04:50:13.047407] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:23.617 [2024-10-15 04:50:13.047418] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f313f300-eab4-4dd8-837e-ccea041cb153 00:26:23.617 [2024-10-15 04:50:13.047429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:23.617 [2024-10-15 04:50:13.047439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156608 00:26:23.617 [2024-10-15 04:50:13.047450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154624 00:26:23.617 [2024-10-15 04:50:13.047461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:26:23.617 [2024-10-15 04:50:13.047479] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:23.617 [2024-10-15 04:50:13.047489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:23.617 [2024-10-15 04:50:13.047499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:23.617 [2024-10-15 04:50:13.047519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:23.617 [2024-10-15 04:50:13.047528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:23.617 [2024-10-15 04:50:13.047539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.617 [2024-10-15 04:50:13.047549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:23.617 [2024-10-15 04:50:13.047560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:26:23.617 [2024-10-15 04:50:13.047570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.617 [2024-10-15 04:50:13.067503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.617 [2024-10-15 04:50:13.067560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:23.617 [2024-10-15 04:50:13.067584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.908 ms 00:26:23.617 [2024-10-15 04:50:13.067596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.617 [2024-10-15 04:50:13.068136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.617 [2024-10-15 04:50:13.068152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:23.617 [2024-10-15 04:50:13.068163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:26:23.617 [2024-10-15 04:50:13.068174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.876 [2024-10-15 
04:50:13.120145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.876 [2024-10-15 04:50:13.120210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:23.876 [2024-10-15 04:50:13.120226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.876 [2024-10-15 04:50:13.120237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.876 [2024-10-15 04:50:13.120309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.120321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:23.877 [2024-10-15 04:50:13.120332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.120342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.120421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.120435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:23.877 [2024-10-15 04:50:13.120450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.120461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.120478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.120489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:23.877 [2024-10-15 04:50:13.120499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.120509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.245216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.245296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.877 [2024-10-15 04:50:13.245311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.245322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.349900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.349969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.877 [2024-10-15 04:50:13.349985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.349996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.350100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.877 [2024-10-15 04:50:13.350111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.350130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.350192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.877 [2024-10-15 04:50:13.350202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.350212] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.350343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.877 [2024-10-15 04:50:13.350354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.350369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.350417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:23.877 [2024-10-15 04:50:13.350428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.350438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.350488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.877 [2024-10-15 04:50:13.350498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.350508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.877 [2024-10-15 04:50:13.350566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.877 [2024-10-15 04:50:13.350576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.877 [2024-10-15 04:50:13.350586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.877 [2024-10-15 04:50:13.350702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.105 ms, result 0 00:26:25.251 00:26:25.251 00:26:25.251 04:50:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:27.222 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:27.222 04:50:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:27.222 [2024-10-15 04:50:16.506880] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
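The 'testfile: OK' line above is the core assertion of this dirty-shutdown test: data written through ftl0 before the unclean stop must read back bit-identical after recovery. The spdk_dd invocation that follows repeats the read for the next 262144-block slice, with --skip=262144 offsetting past the slice already verified. In isolation, one verify round looks like the sketch below (paths shortened; the testfile2.md5 checksum path is an assumption following the testfile.md5 naming in the log, written by an earlier phase not shown here):

# read one 262144-block slice from the recovered FTL bdev into a scratch file
build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile2 \
    --count=262144 --skip=262144 --json=test/ftl/config/ftl.json
# compare against the checksum recorded before the dirty shutdown
md5sum -c test/ftl/testfile2.md5   # assumed checksum path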
00:26:27.222 [2024-10-15 04:50:16.507004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80348 ] 00:26:27.222 [2024-10-15 04:50:16.682556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.480 [2024-10-15 04:50:16.804025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.738 [2024-10-15 04:50:17.179278] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.738 [2024-10-15 04:50:17.179353] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.999 [2024-10-15 04:50:17.342188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.342270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:27.999 [2024-10-15 04:50:17.342287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:27.999 [2024-10-15 04:50:17.342305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.342366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.342379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:27.999 [2024-10-15 04:50:17.342390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:27.999 [2024-10-15 04:50:17.342404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.342427] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:27.999 [2024-10-15 04:50:17.343464] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:27.999 [2024-10-15 04:50:17.343504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.343516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:27.999 [2024-10-15 04:50:17.343528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:26:27.999 [2024-10-15 04:50:17.343538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.345072] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:27.999 [2024-10-15 04:50:17.364994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.365068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:27.999 [2024-10-15 04:50:17.365088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.952 ms 00:26:27.999 [2024-10-15 04:50:17.365099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.365182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.365199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:27.999 [2024-10-15 04:50:17.365211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:27.999 [2024-10-15 04:50:17.365222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.372800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:27.999 [2024-10-15 04:50:17.372854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:27.999 [2024-10-15 04:50:17.372869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.495 ms 00:26:27.999 [2024-10-15 04:50:17.372880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.372976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.372991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:27.999 [2024-10-15 04:50:17.373002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:27.999 [2024-10-15 04:50:17.373012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.373062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.373074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:27.999 [2024-10-15 04:50:17.373084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:27.999 [2024-10-15 04:50:17.373094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.373121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:27.999 [2024-10-15 04:50:17.378112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.378150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:27.999 [2024-10-15 04:50:17.378163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.005 ms 00:26:27.999 [2024-10-15 04:50:17.378174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.378213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.378224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:27.999 [2024-10-15 04:50:17.378235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:27.999 [2024-10-15 04:50:17.378245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.378307] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:27.999 [2024-10-15 04:50:17.378330] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:27.999 [2024-10-15 04:50:17.378379] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:27.999 [2024-10-15 04:50:17.378401] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:27.999 [2024-10-15 04:50:17.378491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:27.999 [2024-10-15 04:50:17.378504] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:27.999 [2024-10-15 04:50:17.378517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:27.999 [2024-10-15 04:50:17.378530] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:27.999 [2024-10-15 04:50:17.378542] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:27.999 [2024-10-15 04:50:17.378554] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:27.999 [2024-10-15 04:50:17.378564] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:27.999 [2024-10-15 04:50:17.378574] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:27.999 [2024-10-15 04:50:17.378583] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:27.999 [2024-10-15 04:50:17.378594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.378607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:27.999 [2024-10-15 04:50:17.378618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:26:27.999 [2024-10-15 04:50:17.378628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.378703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.999 [2024-10-15 04:50:17.378714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:27.999 [2024-10-15 04:50:17.378724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:27.999 [2024-10-15 04:50:17.378734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.999 [2024-10-15 04:50:17.378854] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:27.999 [2024-10-15 04:50:17.378873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:27.999 [2024-10-15 04:50:17.378888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.999 [2024-10-15 04:50:17.378899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.999 [2024-10-15 04:50:17.378910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:27.999 [2024-10-15 04:50:17.378919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:27.999 [2024-10-15 04:50:17.378929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:27.999 [2024-10-15 04:50:17.378939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:27.999 [2024-10-15 04:50:17.378949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:27.999 [2024-10-15 04:50:17.378958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.999 [2024-10-15 04:50:17.378967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:27.999 [2024-10-15 04:50:17.378976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:27.999 [2024-10-15 04:50:17.378987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.999 [2024-10-15 04:50:17.378996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:27.999 [2024-10-15 04:50:17.379005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:27.999 [2024-10-15 04:50:17.379024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:28.000 [2024-10-15 04:50:17.379043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379052] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:28.000 [2024-10-15 04:50:17.379071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:28.000 [2024-10-15 04:50:17.379098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:28.000 [2024-10-15 04:50:17.379126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:28.000 [2024-10-15 04:50:17.379153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:28.000 [2024-10-15 04:50:17.379180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:28.000 [2024-10-15 04:50:17.379198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:28.000 [2024-10-15 04:50:17.379207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:28.000 [2024-10-15 04:50:17.379216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:28.000 [2024-10-15 04:50:17.379224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:28.000 [2024-10-15 04:50:17.379233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:28.000 [2024-10-15 04:50:17.379243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:28.000 [2024-10-15 04:50:17.379261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:28.000 [2024-10-15 04:50:17.379271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379280] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:28.000 [2024-10-15 04:50:17.379298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:28.000 [2024-10-15 04:50:17.379308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.000 [2024-10-15 04:50:17.379328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:28.000 [2024-10-15 04:50:17.379338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:28.000 [2024-10-15 04:50:17.379347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:28.000 
[2024-10-15 04:50:17.379356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:28.000 [2024-10-15 04:50:17.379365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:28.000 [2024-10-15 04:50:17.379374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:28.000 [2024-10-15 04:50:17.379385] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:28.000 [2024-10-15 04:50:17.379398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:28.000 [2024-10-15 04:50:17.379420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:28.000 [2024-10-15 04:50:17.379430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:28.000 [2024-10-15 04:50:17.379441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:28.000 [2024-10-15 04:50:17.379451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:28.000 [2024-10-15 04:50:17.379461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:28.000 [2024-10-15 04:50:17.379471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:28.000 [2024-10-15 04:50:17.379483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:28.000 [2024-10-15 04:50:17.379493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:28.000 [2024-10-15 04:50:17.379503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:28.000 [2024-10-15 04:50:17.379554] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:28.000 [2024-10-15 04:50:17.379565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:28.000 [2024-10-15 04:50:17.379591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:28.000 [2024-10-15 04:50:17.379602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:28.000 [2024-10-15 04:50:17.379612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:28.000 [2024-10-15 04:50:17.379623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.379634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:28.000 [2024-10-15 04:50:17.379645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:26:28.000 [2024-10-15 04:50:17.379655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.000 [2024-10-15 04:50:17.420767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.420843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.000 [2024-10-15 04:50:17.420872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.124 ms 00:26:28.000 [2024-10-15 04:50:17.420892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.000 [2024-10-15 04:50:17.421007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.421022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:28.000 [2024-10-15 04:50:17.421037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:28.000 [2024-10-15 04:50:17.421063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.000 [2024-10-15 04:50:17.486905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.486961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.000 [2024-10-15 04:50:17.486977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.862 ms 00:26:28.000 [2024-10-15 04:50:17.486988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.000 [2024-10-15 04:50:17.487050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.487061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.000 [2024-10-15 04:50:17.487074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.000 [2024-10-15 04:50:17.487088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.000 [2024-10-15 04:50:17.487596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.487627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.000 [2024-10-15 04:50:17.487639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:26:28.000 [2024-10-15 04:50:17.487650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.000 [2024-10-15 04:50:17.487771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.000 [2024-10-15 04:50:17.487792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.000 [2024-10-15 04:50:17.487803] 
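A quick cross-check of the layout figures above (illustrative arithmetic; the 4 KiB FTL block size is inferred from the dumps rather than stated in them): the L2P table needs 20,971,520 entries x 4 B = 83,886,080 B = 80.00 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" reported in the NV cache layout (note the dump prints sizes in MiB under a "blocks" label), and the corresponding superblock entry "type:0x2 ... blk_sz:0x5000" encodes the same region as 0x5000 = 20,480 blocks x 4 KiB = 80 MiB. Its offset, blk_offs:0x20 = 32 blocks x 4 KiB = 0.125 MiB, likewise matches the "offset: 0.12 MiB" shown above.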
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:28.000 [2024-10-15 04:50:17.487844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.508366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.508423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.260 [2024-10-15 04:50:17.508443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.516 ms 00:26:28.260 [2024-10-15 04:50:17.508454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.529255] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:28.260 [2024-10-15 04:50:17.529317] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:28.260 [2024-10-15 04:50:17.529335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.529346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:28.260 [2024-10-15 04:50:17.529367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.772 ms 00:26:28.260 [2024-10-15 04:50:17.529378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.560673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.560756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:28.260 [2024-10-15 04:50:17.560774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.279 ms 00:26:28.260 [2024-10-15 04:50:17.560786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.579944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.580007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:28.260 [2024-10-15 04:50:17.580024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.059 ms 00:26:28.260 [2024-10-15 04:50:17.580035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.599264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.599325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:28.260 [2024-10-15 04:50:17.599342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.202 ms 00:26:28.260 [2024-10-15 04:50:17.599352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.600140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.600175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:28.260 [2024-10-15 04:50:17.600189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:26:28.260 [2024-10-15 04:50:17.600204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.688427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.688730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:28.260 [2024-10-15 04:50:17.688758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.333 ms 00:26:28.260 [2024-10-15 04:50:17.688779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.701674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:28.260 [2024-10-15 04:50:17.705100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.705144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:28.260 [2024-10-15 04:50:17.705160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.261 ms 00:26:28.260 [2024-10-15 04:50:17.705171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.705283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.705296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:28.260 [2024-10-15 04:50:17.705308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:28.260 [2024-10-15 04:50:17.705322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.706228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.706259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:28.260 [2024-10-15 04:50:17.706271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:26:28.260 [2024-10-15 04:50:17.706282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.706317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.706329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:28.260 [2024-10-15 04:50:17.706339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:28.260 [2024-10-15 04:50:17.706349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.706385] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:28.260 [2024-10-15 04:50:17.706401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.706411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:28.260 [2024-10-15 04:50:17.706421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:28.260 [2024-10-15 04:50:17.706431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.745649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.745715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:28.260 [2024-10-15 04:50:17.745732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.259 ms 00:26:28.260 [2024-10-15 04:50:17.745752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.260 [2024-10-15 04:50:17.745868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.260 [2024-10-15 04:50:17.745906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:28.260 [2024-10-15 04:50:17.745918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:28.260 [2024-10-15 04:50:17.745929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:28.260 [2024-10-15 04:50:17.747197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.187 ms, result 0 00:26:29.635  [2024-10-15T04:50:20.074Z] Copying: 30/1024 [MB] (30 MBps) [2024-10-15T04:50:21.044Z] Copying: 59/1024 [MB] (28 MBps) [2024-10-15T04:50:22.000Z] Copying: 88/1024 [MB] (29 MBps) [2024-10-15T04:50:23.379Z] Copying: 118/1024 [MB] (30 MBps) [2024-10-15T04:50:24.316Z] Copying: 149/1024 [MB] (30 MBps) [2024-10-15T04:50:25.258Z] Copying: 179/1024 [MB] (30 MBps) [2024-10-15T04:50:26.194Z] Copying: 209/1024 [MB] (29 MBps) [2024-10-15T04:50:27.131Z] Copying: 240/1024 [MB] (30 MBps) [2024-10-15T04:50:28.126Z] Copying: 270/1024 [MB] (30 MBps) [2024-10-15T04:50:29.064Z] Copying: 299/1024 [MB] (28 MBps) [2024-10-15T04:50:30.001Z] Copying: 327/1024 [MB] (28 MBps) [2024-10-15T04:50:31.379Z] Copying: 358/1024 [MB] (31 MBps) [2024-10-15T04:50:32.316Z] Copying: 388/1024 [MB] (29 MBps) [2024-10-15T04:50:33.253Z] Copying: 416/1024 [MB] (28 MBps) [2024-10-15T04:50:34.190Z] Copying: 445/1024 [MB] (29 MBps) [2024-10-15T04:50:35.128Z] Copying: 474/1024 [MB] (28 MBps) [2024-10-15T04:50:36.065Z] Copying: 503/1024 [MB] (28 MBps) [2024-10-15T04:50:37.004Z] Copying: 533/1024 [MB] (29 MBps) [2024-10-15T04:50:37.941Z] Copying: 562/1024 [MB] (29 MBps) [2024-10-15T04:50:39.318Z] Copying: 594/1024 [MB] (31 MBps) [2024-10-15T04:50:40.253Z] Copying: 624/1024 [MB] (30 MBps) [2024-10-15T04:50:41.237Z] Copying: 651/1024 [MB] (26 MBps) [2024-10-15T04:50:42.174Z] Copying: 679/1024 [MB] (27 MBps) [2024-10-15T04:50:43.144Z] Copying: 706/1024 [MB] (26 MBps) [2024-10-15T04:50:44.080Z] Copying: 731/1024 [MB] (25 MBps) [2024-10-15T04:50:45.017Z] Copying: 758/1024 [MB] (26 MBps) [2024-10-15T04:50:45.954Z] Copying: 784/1024 [MB] (26 MBps) [2024-10-15T04:50:47.331Z] Copying: 811/1024 [MB] (27 MBps) [2024-10-15T04:50:48.275Z] Copying: 839/1024 [MB] (27 MBps) [2024-10-15T04:50:49.252Z] Copying: 866/1024 [MB] (27 MBps) [2024-10-15T04:50:50.192Z] Copying: 893/1024 [MB] (26 MBps) [2024-10-15T04:50:51.127Z] Copying: 920/1024 [MB] (27 MBps) [2024-10-15T04:50:52.062Z] Copying: 948/1024 [MB] (27 MBps) [2024-10-15T04:50:52.998Z] Copying: 977/1024 [MB] (28 MBps) [2024-10-15T04:50:53.934Z] Copying: 1005/1024 [MB] (27 MBps) [2024-10-15T04:50:53.934Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-15 04:50:53.596152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.596218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:04.430 [2024-10-15 04:50:53.596235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:04.430 [2024-10-15 04:50:53.596246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.596269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:04.430 [2024-10-15 04:50:53.600839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.600871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:04.430 [2024-10-15 04:50:53.600890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.558 ms 00:27:04.430 [2024-10-15 04:50:53.600901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.601112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.601125] 
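The copy phase above is consistent with its own timestamps: FTL startup finished at 04:50:17.747 and the final 1024/1024 tick lands at 04:50:53.93, i.e. roughly 36 s for 1024 MB, or about 28 MB/s, matching the reported average of 28 MBps.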
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:04.430 [2024-10-15 04:50:53.601136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:27:04.430 [2024-10-15 04:50:53.601146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.603830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.603848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:04.430 [2024-10-15 04:50:53.603861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:27:04.430 [2024-10-15 04:50:53.603871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.608901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.608929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:04.430 [2024-10-15 04:50:53.608942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:27:04.430 [2024-10-15 04:50:53.608951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.648497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.648703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:04.430 [2024-10-15 04:50:53.648789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.563 ms 00:27:04.430 [2024-10-15 04:50:53.648848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.671121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.671317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:04.430 [2024-10-15 04:50:53.671465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.231 ms 00:27:04.430 [2024-10-15 04:50:53.671503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.673573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.673726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:04.430 [2024-10-15 04:50:53.673810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.982 ms 00:27:04.430 [2024-10-15 04:50:53.673860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.712778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.713012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:04.430 [2024-10-15 04:50:53.713106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.932 ms 00:27:04.430 [2024-10-15 04:50:53.713145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.752012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.430 [2024-10-15 04:50:53.752230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:04.430 [2024-10-15 04:50:53.752308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.849 ms 00:27:04.430 [2024-10-15 04:50:53.752343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.430 [2024-10-15 04:50:53.789784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:04.431 [2024-10-15 04:50:53.789951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:04.431 [2024-10-15 04:50:53.790085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.427 ms 00:27:04.431 [2024-10-15 04:50:53.790122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.431 [2024-10-15 04:50:53.825532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.431 [2024-10-15 04:50:53.825705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:04.431 [2024-10-15 04:50:53.825783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.356 ms 00:27:04.431 [2024-10-15 04:50:53.825829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.431 [2024-10-15 04:50:53.825893] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:04.431 [2024-10-15 04:50:53.825981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:04.431 [2024-10-15 04:50:53.826043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:04.431 [2024-10-15 04:50:53.826126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.826948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.827033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.827082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.827130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.827178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:04.431 [2024-10-15 04:50:53.827270] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 19 through 93 (75 identical entries condensed), each:
0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:04.432 [2024-10-15 04:50:53.828391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:04.432 [2024-10-15 04:50:53.828407] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f313f300-eab4-4dd8-837e-ccea041cb153 00:27:04.432 [2024-10-15 04:50:53.828419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:04.432 [2024-10-15 04:50:53.828429] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:04.432 [2024-10-15 04:50:53.828438] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:04.432 [2024-10-15 04:50:53.828448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:04.432 [2024-10-15 04:50:53.828457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:04.432 [2024-10-15 04:50:53.828468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:04.432 [2024-10-15 04:50:53.828487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:04.432 [2024-10-15 04:50:53.828496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:04.432 [2024-10-15 04:50:53.828505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:04.432 [2024-10-15 04:50:53.828519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.432 [2024-10-15 04:50:53.828530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:04.432 [2024-10-15 04:50:53.828542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:27:04.432 [2024-10-15 04:50:53.828552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.432 [2024-10-15 04:50:53.848309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.432 [2024-10-15 04:50:53.848353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:04.432 [2024-10-15 04:50:53.848368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.741 ms 00:27:04.432 [2024-10-15 04:50:53.848378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.432 [2024-10-15 04:50:53.848997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.432 [2024-10-15 04:50:53.849010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:04.432 [2024-10-15 04:50:53.849027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 
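The "WAF: inf" in the statistics dump follows directly from the two counters beside it: write amplification is device writes over user writes, here 960 / 0; the restarted instance recorded 960 internal writes but no user writes, so the ratio is printed as inf.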
00:27:04.432 [2024-10-15 04:50:53.849037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.432 [2024-10-15 04:50:53.899765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.432 [2024-10-15 04:50:53.899831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.432 [2024-10-15 04:50:53.899847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.432 [2024-10-15 04:50:53.899858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.432 [2024-10-15 04:50:53.899926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.432 [2024-10-15 04:50:53.899937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.432 [2024-10-15 04:50:53.899970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.432 [2024-10-15 04:50:53.899980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.432 [2024-10-15 04:50:53.900061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.432 [2024-10-15 04:50:53.900074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.432 [2024-10-15 04:50:53.900085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.432 [2024-10-15 04:50:53.900095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.432 [2024-10-15 04:50:53.900112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.432 [2024-10-15 04:50:53.900122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.432 [2024-10-15 04:50:53.900132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.432 [2024-10-15 04:50:53.900147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.691 [2024-10-15 04:50:54.025789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.691 [2024-10-15 04:50:54.025864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:04.691 [2024-10-15 04:50:54.025881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.691 [2024-10-15 04:50:54.025892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.691 [2024-10-15 04:50:54.128492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.691 [2024-10-15 04:50:54.128557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:04.691 [2024-10-15 04:50:54.128578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.691 [2024-10-15 04:50:54.128588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.691 [2024-10-15 04:50:54.128683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.691 [2024-10-15 04:50:54.128696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:04.691 [2024-10-15 04:50:54.128707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.691 [2024-10-15 04:50:54.128717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.691 [2024-10-15 04:50:54.128761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.691 [2024-10-15 04:50:54.128772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:04.691 [2024-10-15 04:50:54.128782] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.691 [2024-10-15 04:50:54.128792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.691 [2024-10-15 04:50:54.128933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.691 [2024-10-15 04:50:54.128948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:04.692 [2024-10-15 04:50:54.128958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.692 [2024-10-15 04:50:54.128968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.692 [2024-10-15 04:50:54.129006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.692 [2024-10-15 04:50:54.129019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:04.692 [2024-10-15 04:50:54.129029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.692 [2024-10-15 04:50:54.129039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.692 [2024-10-15 04:50:54.129096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.692 [2024-10-15 04:50:54.129108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:04.692 [2024-10-15 04:50:54.129118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.692 [2024-10-15 04:50:54.129135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.692 [2024-10-15 04:50:54.129179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.692 [2024-10-15 04:50:54.129191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:04.692 [2024-10-15 04:50:54.129201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.692 [2024-10-15 04:50:54.129211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.692 [2024-10-15 04:50:54.129334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.014 ms, result 0 00:27:06.085 00:27:06.085 00:27:06.085 04:50:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:07.469 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:07.469 04:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:07.469 04:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:07.469 04:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:07.469 04:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:07.728 04:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:07.986 Process with pid 78519 is not found 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78519 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78519 ']' 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@954 -- # kill -0 78519 00:27:07.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78519) - No such process 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78519 is not found' 00:27:07.986 04:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:08.245 Remove shared memory files 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:08.245 ************************************ 00:27:08.245 END TEST ftl_dirty_shutdown 00:27:08.245 ************************************ 00:27:08.245 00:27:08.245 real 3m36.264s 00:27:08.245 user 4m4.758s 00:27:08.245 sys 0m40.190s 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.245 04:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:08.245 04:50:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:08.245 04:50:57 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:08.245 04:50:57 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.245 04:50:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:08.245 ************************************ 00:27:08.245 START TEST ftl_upgrade_shutdown 00:27:08.245 ************************************ 00:27:08.245 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:08.245 * Looking for test storage... 
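The killprocess trace above shows the standard probe-then-report pattern: `kill -0` delivers no signal and only tests whether the pid exists, and since pid 78519 is already gone the helper falls through to the echo. An illustrative reconstruction of that pattern (the real helper lives in test/common/autotest_common.sh and differs in detail):

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1            # nothing recorded for this stage
        if kill -0 "$pid" 2>/dev/null; then  # signal 0 only probes existence
            kill "$pid"
            wait "$pid" 2>/dev/null          # reap if it was our child
        else
            echo "Process with pid $pid is not found"
        fi
    }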
00:27:08.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:08.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.504 --rc genhtml_branch_coverage=1 00:27:08.504 --rc genhtml_function_coverage=1 00:27:08.504 --rc genhtml_legend=1 00:27:08.504 --rc geninfo_all_blocks=1 00:27:08.504 --rc geninfo_unexecuted_blocks=1 00:27:08.504 00:27:08.504 ' 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:08.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.504 --rc genhtml_branch_coverage=1 00:27:08.504 --rc genhtml_function_coverage=1 00:27:08.504 --rc genhtml_legend=1 00:27:08.504 --rc geninfo_all_blocks=1 00:27:08.504 --rc geninfo_unexecuted_blocks=1 00:27:08.504 00:27:08.504 ' 00:27:08.504 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:08.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.504 --rc genhtml_branch_coverage=1 00:27:08.504 --rc genhtml_function_coverage=1 00:27:08.504 --rc genhtml_legend=1 00:27:08.504 --rc geninfo_all_blocks=1 00:27:08.504 --rc geninfo_unexecuted_blocks=1 00:27:08.505 00:27:08.505 ' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:08.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.505 --rc genhtml_branch_coverage=1 00:27:08.505 --rc genhtml_function_coverage=1 00:27:08.505 --rc genhtml_legend=1 00:27:08.505 --rc geninfo_all_blocks=1 00:27:08.505 --rc geninfo_unexecuted_blocks=1 00:27:08.505 00:27:08.505 ' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:08.505 04:50:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80834 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80834 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80834 ']' 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.505 04:50:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:08.505 [2024-10-15 04:50:58.006866] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
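Annotation: the scripts/common.sh trace a few entries above (lt 1.15 2, expanding to cmp_versions 1.15 '<' 2) is the harness checking whether the installed lcov predates version 2 before choosing coverage flags. The following is a condensed reconstruction inferred from that xtrace, not the verbatim helper (the real script also normalizes non-numeric components through a decimal() helper, visible in the trace):

    # split each version on . - or : and compare component-wise;
    # missing components default to 0 in bash arithmetic
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # all components equal: true only for =, <=, >=
    }
    cmp_versions 1.15 '<' 2 && echo "lcov older than 2"   # exits 0, as in the trace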
00:27:08.505 [2024-10-15 04:50:58.007123] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80834 ] 00:27:08.763 [2024-10-15 04:50:58.179707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.022 [2024-10-15 04:50:58.296224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:09.958 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:10.217 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:10.217 { 00:27:10.217 "name": "basen1", 00:27:10.217 "aliases": [ 00:27:10.217 "483ef45c-bc81-4343-ae66-ef16e48c9639" 00:27:10.217 ], 00:27:10.217 "product_name": "NVMe disk", 00:27:10.217 "block_size": 4096, 00:27:10.217 "num_blocks": 1310720, 00:27:10.217 "uuid": "483ef45c-bc81-4343-ae66-ef16e48c9639", 00:27:10.217 "numa_id": -1, 00:27:10.217 "assigned_rate_limits": { 00:27:10.217 "rw_ios_per_sec": 0, 00:27:10.217 "rw_mbytes_per_sec": 0, 00:27:10.217 "r_mbytes_per_sec": 0, 00:27:10.217 "w_mbytes_per_sec": 0 00:27:10.217 }, 00:27:10.217 "claimed": true, 00:27:10.217 "claim_type": "read_many_write_one", 00:27:10.217 "zoned": false, 00:27:10.217 "supported_io_types": { 00:27:10.217 "read": true, 00:27:10.218 "write": true, 00:27:10.218 "unmap": true, 00:27:10.218 "flush": true, 00:27:10.218 "reset": true, 00:27:10.218 "nvme_admin": true, 00:27:10.218 "nvme_io": true, 00:27:10.218 "nvme_io_md": false, 00:27:10.218 "write_zeroes": true, 00:27:10.218 "zcopy": false, 00:27:10.218 "get_zone_info": false, 00:27:10.218 "zone_management": false, 00:27:10.218 "zone_append": false, 00:27:10.218 "compare": true, 00:27:10.218 "compare_and_write": false, 00:27:10.218 "abort": true, 00:27:10.218 "seek_hole": false, 00:27:10.218 "seek_data": false, 00:27:10.218 "copy": true, 00:27:10.218 "nvme_iov_md": false 00:27:10.218 }, 00:27:10.218 "driver_specific": { 00:27:10.218 "nvme": [ 00:27:10.218 { 00:27:10.218 "pci_address": "0000:00:11.0", 00:27:10.218 "trid": { 00:27:10.218 "trtype": "PCIe", 00:27:10.218 "traddr": "0000:00:11.0" 00:27:10.218 }, 00:27:10.218 "ctrlr_data": { 00:27:10.218 "cntlid": 0, 00:27:10.218 "vendor_id": "0x1b36", 00:27:10.218 "model_number": "QEMU NVMe Ctrl", 00:27:10.218 "serial_number": "12341", 00:27:10.218 "firmware_revision": "8.0.0", 00:27:10.218 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:10.218 "oacs": { 00:27:10.218 "security": 0, 00:27:10.218 "format": 1, 00:27:10.218 "firmware": 0, 00:27:10.218 "ns_manage": 1 00:27:10.218 }, 00:27:10.218 "multi_ctrlr": false, 00:27:10.218 "ana_reporting": false 00:27:10.218 }, 00:27:10.218 "vs": { 00:27:10.218 "nvme_version": "1.4" 00:27:10.218 }, 00:27:10.218 "ns_data": { 00:27:10.218 "id": 1, 00:27:10.218 "can_share": false 00:27:10.218 } 00:27:10.218 } 00:27:10.218 ], 00:27:10.218 "mp_policy": "active_passive" 00:27:10.218 } 00:27:10.218 } 00:27:10.218 ]' 00:27:10.218 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:10.476 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:10.477 04:50:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:10.477 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:10.735 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a8e05932-182e-4c17-8781-8bfe1c81fef8 00:27:10.735 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:10.735 04:50:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a8e05932-182e-4c17-8781-8bfe1c81fef8 00:27:10.736 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:10.994 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7c7dc58d-7c96-4f6c-ad15-75c7d104804f 00:27:10.994 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7c7dc58d-7c96-4f6c-ad15-75c7d104804f 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 ]] 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 5120 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:11.253 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:11.512 { 00:27:11.512 "name": "74f2c7dc-c802-4d4f-a05a-bf9db5c3c836", 00:27:11.512 "aliases": [ 00:27:11.512 "lvs/basen1p0" 00:27:11.512 ], 00:27:11.512 "product_name": "Logical Volume", 00:27:11.512 "block_size": 4096, 00:27:11.512 "num_blocks": 5242880, 00:27:11.512 "uuid": "74f2c7dc-c802-4d4f-a05a-bf9db5c3c836", 00:27:11.512 "assigned_rate_limits": { 00:27:11.512 "rw_ios_per_sec": 0, 00:27:11.512 "rw_mbytes_per_sec": 0, 00:27:11.512 "r_mbytes_per_sec": 0, 00:27:11.512 "w_mbytes_per_sec": 0 00:27:11.512 }, 00:27:11.512 "claimed": false, 00:27:11.512 "zoned": false, 00:27:11.512 "supported_io_types": { 00:27:11.512 "read": true, 00:27:11.512 "write": true, 00:27:11.512 "unmap": true, 00:27:11.512 "flush": false, 00:27:11.512 "reset": true, 00:27:11.512 "nvme_admin": false, 00:27:11.512 "nvme_io": false, 00:27:11.512 "nvme_io_md": false, 00:27:11.512 "write_zeroes": 
true, 00:27:11.512 "zcopy": false, 00:27:11.512 "get_zone_info": false, 00:27:11.512 "zone_management": false, 00:27:11.512 "zone_append": false, 00:27:11.512 "compare": false, 00:27:11.512 "compare_and_write": false, 00:27:11.512 "abort": false, 00:27:11.512 "seek_hole": true, 00:27:11.512 "seek_data": true, 00:27:11.512 "copy": false, 00:27:11.512 "nvme_iov_md": false 00:27:11.512 }, 00:27:11.512 "driver_specific": { 00:27:11.512 "lvol": { 00:27:11.512 "lvol_store_uuid": "7c7dc58d-7c96-4f6c-ad15-75c7d104804f", 00:27:11.512 "base_bdev": "basen1", 00:27:11.512 "thin_provision": true, 00:27:11.512 "num_allocated_clusters": 0, 00:27:11.512 "snapshot": false, 00:27:11.512 "clone": false, 00:27:11.512 "esnap_clone": false 00:27:11.512 } 00:27:11.512 } 00:27:11.512 } 00:27:11.512 ]' 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:11.512 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:11.513 04:51:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:11.771 04:51:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:11.771 04:51:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:11.771 04:51:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:12.030 04:51:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:12.030 04:51:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:12.030 04:51:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 74f2c7dc-c802-4d4f-a05a-bf9db5c3c836 -c cachen1p0 --l2p_dram_limit 2 00:27:12.289 [2024-10-15 04:51:01.553652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.289 [2024-10-15 04:51:01.553710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:12.289 [2024-10-15 04:51:01.553729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:12.289 [2024-10-15 04:51:01.553740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.289 [2024-10-15 04:51:01.553802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.289 [2024-10-15 04:51:01.553834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:12.289 [2024-10-15 04:51:01.553849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:12.289 [2024-10-15 04:51:01.553860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.289 [2024-10-15 04:51:01.553884] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:12.289 [2024-10-15 
04:51:01.554884] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:12.289 [2024-10-15 04:51:01.554926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.289 [2024-10-15 04:51:01.554939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:12.289 [2024-10-15 04:51:01.554954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.045 ms 00:27:12.289 [2024-10-15 04:51:01.554965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.289 [2024-10-15 04:51:01.555046] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 76c1b055-8658-4083-b34a-dd57ff62c762 00:27:12.289 [2024-10-15 04:51:01.556461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.289 [2024-10-15 04:51:01.556498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:12.289 [2024-10-15 04:51:01.556512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:12.290 [2024-10-15 04:51:01.556525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.563919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.563952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:12.290 [2024-10-15 04:51:01.563965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.362 ms 00:27:12.290 [2024-10-15 04:51:01.563981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.564030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.564047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:12.290 [2024-10-15 04:51:01.564059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:12.290 [2024-10-15 04:51:01.564074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.564143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.564159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:12.290 [2024-10-15 04:51:01.564169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:12.290 [2024-10-15 04:51:01.564182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.564210] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:12.290 [2024-10-15 04:51:01.569258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.569432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:12.290 [2024-10-15 04:51:01.569469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.063 ms 00:27:12.290 [2024-10-15 04:51:01.569494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.569536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.569548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:12.290 [2024-10-15 04:51:01.569562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:12.290 [2024-10-15 04:51:01.569572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.569611] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:12.290 [2024-10-15 04:51:01.569739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:12.290 [2024-10-15 04:51:01.569759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:12.290 [2024-10-15 04:51:01.569773] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:12.290 [2024-10-15 04:51:01.569789] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:12.290 [2024-10-15 04:51:01.569801] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:12.290 [2024-10-15 04:51:01.569831] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:12.290 [2024-10-15 04:51:01.569843] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:12.290 [2024-10-15 04:51:01.569855] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:12.290 [2024-10-15 04:51:01.569865] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:12.290 [2024-10-15 04:51:01.569882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.569892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:12.290 [2024-10-15 04:51:01.569905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:27:12.290 [2024-10-15 04:51:01.569915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.569991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.290 [2024-10-15 04:51:01.570002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:12.290 [2024-10-15 04:51:01.570017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:12.290 [2024-10-15 04:51:01.570038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.290 [2024-10-15 04:51:01.570127] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:12.290 [2024-10-15 04:51:01.570142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:12.290 [2024-10-15 04:51:01.570155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:12.290 [2024-10-15 04:51:01.570188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:12.290 [2024-10-15 04:51:01.570210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:12.290 [2024-10-15 04:51:01.570222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:12.290 [2024-10-15 04:51:01.570232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:12.290 [2024-10-15 04:51:01.570253] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:12.290 [2024-10-15 04:51:01.570264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:12.290 [2024-10-15 04:51:01.570285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:12.290 [2024-10-15 04:51:01.570294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:12.290 [2024-10-15 04:51:01.570326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:12.290 [2024-10-15 04:51:01.570344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:12.290 [2024-10-15 04:51:01.570379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:12.290 [2024-10-15 04:51:01.570389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:12.290 [2024-10-15 04:51:01.570410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:12.290 [2024-10-15 04:51:01.570423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:12.290 [2024-10-15 04:51:01.570444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:12.290 [2024-10-15 04:51:01.570453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:12.290 [2024-10-15 04:51:01.570474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:12.290 [2024-10-15 04:51:01.570486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:12.290 [2024-10-15 04:51:01.570511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:12.290 [2024-10-15 04:51:01.570520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:12.290 [2024-10-15 04:51:01.570542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:12.290 [2024-10-15 04:51:01.570575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:12.290 [2024-10-15 04:51:01.570627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:12.290 [2024-10-15 04:51:01.570642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570651] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:12.290 [2024-10-15 04:51:01.570665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:12.290 [2024-10-15 04:51:01.570675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.290 [2024-10-15 04:51:01.570697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:12.290 [2024-10-15 04:51:01.570714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:12.290 [2024-10-15 04:51:01.570723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:12.290 [2024-10-15 04:51:01.570735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:12.290 [2024-10-15 04:51:01.570745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:12.290 [2024-10-15 04:51:01.570757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:12.290 [2024-10-15 04:51:01.570771] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:12.290 [2024-10-15 04:51:01.570787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.290 [2024-10-15 04:51:01.570800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:12.290 [2024-10-15 04:51:01.570825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:12.290 [2024-10-15 04:51:01.570837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:12.290 [2024-10-15 04:51:01.570850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:12.290 [2024-10-15 04:51:01.570861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:12.290 [2024-10-15 04:51:01.570874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:12.290 [2024-10-15 04:51:01.570884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:12.290 [2024-10-15 04:51:01.570897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:12.290 [2024-10-15 04:51:01.570908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:12.290 [2024-10-15 04:51:01.570924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:12.290 [2024-10-15 04:51:01.570934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:12.291 [2024-10-15 04:51:01.570947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:12.291 [2024-10-15 04:51:01.570959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:12.291 [2024-10-15 04:51:01.570980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:12.291 [2024-10-15 04:51:01.570997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:12.291 [2024-10-15 04:51:01.571020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.291 [2024-10-15 04:51:01.571037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:12.291 [2024-10-15 04:51:01.571051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:12.291 [2024-10-15 04:51:01.571067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:12.291 [2024-10-15 04:51:01.571089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:12.291 [2024-10-15 04:51:01.571108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.291 [2024-10-15 04:51:01.571122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:12.291 [2024-10-15 04:51:01.571132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:27:12.291 [2024-10-15 04:51:01.571145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.291 [2024-10-15 04:51:01.571190] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
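Annotation: the FTL instance whose startup log appears above was assembled by the RPCs traced between 04:50:59 and 04:51:01, collected here in one place for readability (the two UUIDs are run-specific values returned by the lvstore/lvol calls, shown as placeholders):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1, 1310720 x 4096 B = 5120 MiB
    $rpc bdev_lvol_create_lvstore basen1 lvs                           # -> <lvs-uuid>
    $rpc bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>              # thin-provisioned 20480 MiB lvol
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0, the 5120 MiB write buffer
    $rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The -t flag is what lets a 20480 MiB volume sit on the 5120 MiB basen1 device: the harness first checked [[ 20480 -le 5120 ]] and fell through to thin provisioning. The layout dump above is self-consistent with these numbers: 20480.00 MiB base capacity with an 18432.00 MiB data region, 5120.00 MiB NV cache, and 3,774,873 L2P entries at 4 bytes each, about 14.4 MiB, matching the 14.50 MiB l2p region; --l2p_dram_limit 2 is what later caps the resident L2P table at 2 MiB.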
00:27:12.291 [2024-10-15 04:51:01.571208] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:15.580 [2024-10-15 04:51:04.790107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.580 [2024-10-15 04:51:04.790194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:15.580 [2024-10-15 04:51:04.790213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3224.141 ms 00:27:15.580 [2024-10-15 04:51:04.790227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.580 [2024-10-15 04:51:04.830056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.580 [2024-10-15 04:51:04.830111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:15.580 [2024-10-15 04:51:04.830128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.571 ms 00:27:15.580 [2024-10-15 04:51:04.830158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.580 [2024-10-15 04:51:04.830242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.580 [2024-10-15 04:51:04.830259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:15.580 [2024-10-15 04:51:04.830271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:15.580 [2024-10-15 04:51:04.830287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.580 [2024-10-15 04:51:04.874045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.580 [2024-10-15 04:51:04.874105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:15.580 [2024-10-15 04:51:04.874121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.771 ms 00:27:15.580 [2024-10-15 04:51:04.874135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.580 [2024-10-15 04:51:04.874180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.580 [2024-10-15 04:51:04.874196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:15.580 [2024-10-15 04:51:04.874208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:15.580 [2024-10-15 04:51:04.874224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.874713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.874730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:15.581 [2024-10-15 04:51:04.874741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:27:15.581 [2024-10-15 04:51:04.874754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.874802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.874816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:15.581 [2024-10-15 04:51:04.874863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:15.581 [2024-10-15 04:51:04.874880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.895701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.895759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:15.581 [2024-10-15 04:51:04.895777] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.828 ms 00:27:15.581 [2024-10-15 04:51:04.895793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.920679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:15.581 [2024-10-15 04:51:04.921903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.921936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:15.581 [2024-10-15 04:51:04.921954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.020 ms 00:27:15.581 [2024-10-15 04:51:04.921964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.954636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.954679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:15.581 [2024-10-15 04:51:04.954697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.680 ms 00:27:15.581 [2024-10-15 04:51:04.954708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.954801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.954829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:15.581 [2024-10-15 04:51:04.954846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:15.581 [2024-10-15 04:51:04.954890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:04.990683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:04.990861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:15.581 [2024-10-15 04:51:04.990906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.793 ms 00:27:15.581 [2024-10-15 04:51:04.990918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:05.026857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:05.026894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:15.581 [2024-10-15 04:51:05.026911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.950 ms 00:27:15.581 [2024-10-15 04:51:05.026921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.581 [2024-10-15 04:51:05.027561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.581 [2024-10-15 04:51:05.027580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:15.581 [2024-10-15 04:51:05.027594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms 00:27:15.581 [2024-10-15 04:51:05.027605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.127298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.840 [2024-10-15 04:51:05.127510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:15.840 [2024-10-15 04:51:05.127543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.797 ms 00:27:15.840 [2024-10-15 04:51:05.127554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.165651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:15.840 [2024-10-15 04:51:05.165694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:15.840 [2024-10-15 04:51:05.165726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.043 ms 00:27:15.840 [2024-10-15 04:51:05.165737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.202375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.840 [2024-10-15 04:51:05.202416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:15.840 [2024-10-15 04:51:05.202434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.649 ms 00:27:15.840 [2024-10-15 04:51:05.202444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.239913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.840 [2024-10-15 04:51:05.239954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:15.840 [2024-10-15 04:51:05.239971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.482 ms 00:27:15.840 [2024-10-15 04:51:05.239982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.240030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.840 [2024-10-15 04:51:05.240042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:15.840 [2024-10-15 04:51:05.240059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:15.840 [2024-10-15 04:51:05.240069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.240184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.840 [2024-10-15 04:51:05.240197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:15.840 [2024-10-15 04:51:05.240211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:15.840 [2024-10-15 04:51:05.240221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.840 [2024-10-15 04:51:05.241244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3693.089 ms, result 0 00:27:15.840 { 00:27:15.840 "name": "ftl", 00:27:15.840 "uuid": "76c1b055-8658-4083-b34a-dd57ff62c762" 00:27:15.840 } 00:27:15.840 04:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:16.099 [2024-10-15 04:51:05.472078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.099 04:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:16.358 04:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:16.617 [2024-10-15 04:51:05.883906] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:16.617 04:51:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:16.617 [2024-10-15 04:51:06.093401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:16.617 04:51:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:17.186 Fill FTL, iteration 1 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80966 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80966 /var/tmp/spdk.tgt.sock 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80966 ']' 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:17.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:17.186 04:51:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.186 [2024-10-15 04:51:06.570668] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
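Annotation: once FTL is up, the target side publishes it over NVMe/TCP so that a separate process can drive I/O against it. Every RPC below appears verbatim in the trace above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1    # -a allow any host, -m max 1 namespace
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl        # expose the ftl bdev as namespace 1
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config

The fill loop is then parameterized for two passes of bs=1048576 x count=1024, i.e. 1 GiB (size=1073741824) per iteration, with two I/Os in flight (qd=2).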
00:27:17.186 [2024-10-15 04:51:06.571495] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80966 ] 00:27:17.445 [2024-10-15 04:51:06.736660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.446 [2024-10-15 04:51:06.864794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.382 04:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:18.382 04:51:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:18.382 04:51:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:18.641 ftln1 00:27:18.641 04:51:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:18.641 04:51:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:18.899 04:51:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80966 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80966 ']' 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80966 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80966 00:27:18.900 killing process with pid 80966 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80966' 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80966 00:27:18.900 04:51:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80966 00:27:21.442 04:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:21.442 04:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:21.442 [2024-10-15 04:51:10.690908] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
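Annotation: tcp_initiator_setup, traced above, exists only to produce ini.json: a short-lived helper target on core 1 attaches to the NVMe/TCP export (surfacing the namespace as ftln1), its bdev subsystem configuration is dumped to JSON, and the helper is killed again. A sketch of that flow, using the paths shown in the trace:

    ini_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # -> ftln1
    {
        echo '{"subsystems": ['
        $ini_rpc save_subsystem_config -n bdev           # bdev subsystem only
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

spdk_dd then replays this JSON via --json=.../ini.json, so each dd run below reconnects to the target on its own, with no long-lived initiator process.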
00:27:21.442 [2024-10-15 04:51:10.691163] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81023 ] 00:27:21.442 [2024-10-15 04:51:10.860206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.701 [2024-10-15 04:51:10.977769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.077  [2024-10-15T04:51:13.517Z] Copying: 246/1024 [MB] (246 MBps) [2024-10-15T04:51:14.455Z] Copying: 475/1024 [MB] (229 MBps) [2024-10-15T04:51:15.830Z] Copying: 717/1024 [MB] (242 MBps) [2024-10-15T04:51:15.830Z] Copying: 962/1024 [MB] (245 MBps) [2024-10-15T04:51:17.229Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:27:27.725 00:27:27.725 Calculate MD5 checksum, iteration 1 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:27.725 04:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:27.725 [2024-10-15 04:51:16.946917] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
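Annotation: each fill pass is verified by reading the same 1 GiB back through a fresh spdk_dd into a regular file and hashing it; the digest for iteration 1 is captured into sums[0] in the entries that follow. The read-back and hash, as traced:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # -> sums[0]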
00:27:27.725 [2024-10-15 04:51:16.947276] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81087 ] 00:27:27.725 [2024-10-15 04:51:17.126644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.986 [2024-10-15 04:51:17.243912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.361  [2024-10-15T04:51:19.432Z] Copying: 708/1024 [MB] (708 MBps) [2024-10-15T04:51:20.368Z] Copying: 1024/1024 [MB] (average 695 MBps) 00:27:30.864 00:27:30.864 04:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:30.864 04:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2e1862f958c440870292ee5c0bcbb24e 00:27:32.767 Fill FTL, iteration 2 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.767 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.768 04:51:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:32.768 [2024-10-15 04:51:21.897422] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
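[Editor's note] The sums[i]= assignment above records the checksum of the first 1 GiB window, and the counter then advances into iteration 2 with seek=1024. Reconstructed from the upgrade_shutdown.sh@38-48 xtrace lines, the driving loop looks roughly like this (a sketch inferred from the trace, with variable names as they appear there):

  seek=0
  skip=0
  i=0
  while (( i < iterations )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    (( seek += 1024 ))      # next fill starts one GiB further into the bdev
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 \
        --count=1024 --qd=2 --skip=$skip
    (( skip += 1024 ))
    # One checksum per window; iteration 1 produced 2e1862f958c440870292ee5c0bcbb24e.
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    (( i++ ))
  done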
00:27:32.768 [2024-10-15 04:51:21.897751] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81145 ] 00:27:32.768 [2024-10-15 04:51:22.073371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.768 [2024-10-15 04:51:22.190411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.144  [2024-10-15T04:51:25.025Z] Copying: 246/1024 [MB] (246 MBps) [2024-10-15T04:51:25.963Z] Copying: 495/1024 [MB] (249 MBps) [2024-10-15T04:51:26.899Z] Copying: 746/1024 [MB] (251 MBps) [2024-10-15T04:51:26.899Z] Copying: 995/1024 [MB] (249 MBps) [2024-10-15T04:51:28.276Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:27:38.772 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:38.772 Calculate MD5 checksum, iteration 2 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:38.772 04:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:38.772 [2024-10-15 04:51:28.010345] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
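[Editor's note] Once the second fill and checksum pass complete, the remaining steps in this stretch of the log (upgrade_shutdown.sh@52-71 below) toggle FTL properties over RPC and confirm the NV cache actually holds data before shutdown. A condensed sketch of those calls, reusing the jq filter exactly as it appears a few lines down; the failure handling is an assumption based on the [[ 3 -eq 0 ]] guard visible at @64:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Verbose mode exposes the advanced properties (bands, chunks) dumped below.
  $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
  # Ask FTL to run its upgrade preparation work during the coming shutdown.
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # Count cache chunks with non-zero utilization; the log below reports used=3.
  used=$($rpc bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  # Assumption: an empty cache would mean nothing was written and the test should bail.
  [[ $used -eq 0 ]] && exit 1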
00:27:38.772 [2024-10-15 04:51:28.010657] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81209 ] 00:27:38.772 [2024-10-15 04:51:28.182642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.030 [2024-10-15 04:51:28.301239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.935  [2024-10-15T04:51:30.698Z] Copying: 700/1024 [MB] (700 MBps) [2024-10-15T04:51:32.076Z] Copying: 1024/1024 [MB] (average 661 MBps) 00:27:42.572 00:27:42.572 04:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:42.572 04:51:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:44.506 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:44.506 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4b75dad65c75e9398d7f759a102aed60 00:27:44.506 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:44.506 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:44.506 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:44.506 [2024-10-15 04:51:33.671132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.506 [2024-10-15 04:51:33.671216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:44.506 [2024-10-15 04:51:33.671237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:44.506 [2024-10-15 04:51:33.671248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.506 [2024-10-15 04:51:33.671281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.506 [2024-10-15 04:51:33.671295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:44.506 [2024-10-15 04:51:33.671307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:44.506 [2024-10-15 04:51:33.671318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.506 [2024-10-15 04:51:33.671345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.506 [2024-10-15 04:51:33.671357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:44.506 [2024-10-15 04:51:33.671368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:44.506 [2024-10-15 04:51:33.671379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.506 [2024-10-15 04:51:33.671448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.326 ms, result 0 00:27:44.506 true 00:27:44.506 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:44.506 { 00:27:44.506 "name": "ftl", 00:27:44.506 "properties": [ 00:27:44.506 { 00:27:44.506 "name": "superblock_version", 00:27:44.506 "value": 5, 00:27:44.506 "read-only": true 00:27:44.506 }, 00:27:44.506 { 00:27:44.506 "name": "base_device", 00:27:44.506 "bands": [ 00:27:44.506 { 00:27:44.506 "id": 0, 00:27:44.506 "state": "FREE", 00:27:44.506 "validity": 0.0 
00:27:44.506 }, 00:27:44.506 { 00:27:44.506 "id": 1, 00:27:44.506 "state": "FREE", 00:27:44.506 "validity": 0.0 00:27:44.506 }, 00:27:44.506 { 00:27:44.506 "id": 2, 00:27:44.506 "state": "FREE", 00:27:44.506 "validity": 0.0 00:27:44.506 }, 00:27:44.506 { 00:27:44.506 "id": 3, 00:27:44.506 "state": "FREE", 00:27:44.506 "validity": 0.0 00:27:44.506 }, 00:27:44.506 { 00:27:44.506 "id": 4, 00:27:44.506 "state": "FREE", 00:27:44.506 "validity": 0.0 00:27:44.506 }, 00:27:44.506 { 00:27:44.506 "id": 5, 00:27:44.506 "state": "FREE", 00:27:44.506 "validity": 0.0 00:27:44.506 }, 00:27:44.506 { 00:27:44.507 "id": 6, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 7, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 8, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 9, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 10, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 11, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 12, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 13, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 14, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 15, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 16, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 17, 00:27:44.507 "state": "FREE", 00:27:44.507 "validity": 0.0 00:27:44.507 } 00:27:44.507 ], 00:27:44.507 "read-only": true 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "name": "cache_device", 00:27:44.507 "type": "bdev", 00:27:44.507 "chunks": [ 00:27:44.507 { 00:27:44.507 "id": 0, 00:27:44.507 "state": "INACTIVE", 00:27:44.507 "utilization": 0.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 1, 00:27:44.507 "state": "CLOSED", 00:27:44.507 "utilization": 1.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 2, 00:27:44.507 "state": "CLOSED", 00:27:44.507 "utilization": 1.0 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 3, 00:27:44.507 "state": "OPEN", 00:27:44.507 "utilization": 0.001953125 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "id": 4, 00:27:44.507 "state": "OPEN", 00:27:44.507 "utilization": 0.0 00:27:44.507 } 00:27:44.507 ], 00:27:44.507 "read-only": true 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "name": "verbose_mode", 00:27:44.507 "value": true, 00:27:44.507 "unit": "", 00:27:44.507 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:44.507 }, 00:27:44.507 { 00:27:44.507 "name": "prep_upgrade_on_shutdown", 00:27:44.507 "value": false, 00:27:44.507 "unit": "", 00:27:44.507 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:44.507 } 00:27:44.507 ] 00:27:44.507 } 00:27:44.507 04:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:44.766 [2024-10-15 04:51:34.075087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:44.766 [2024-10-15 04:51:34.075354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:44.766 [2024-10-15 04:51:34.075385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:44.766 [2024-10-15 04:51:34.075397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.766 [2024-10-15 04:51:34.075447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.766 [2024-10-15 04:51:34.075460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:44.766 [2024-10-15 04:51:34.075472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:44.766 [2024-10-15 04:51:34.075483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.766 [2024-10-15 04:51:34.075504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.766 [2024-10-15 04:51:34.075515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:44.766 [2024-10-15 04:51:34.075526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:44.766 [2024-10-15 04:51:34.075536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.766 [2024-10-15 04:51:34.075608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.509 ms, result 0 00:27:44.766 true 00:27:44.766 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:44.766 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:44.766 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:45.026 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:45.026 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:45.026 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:45.026 [2024-10-15 04:51:34.503039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.026 [2024-10-15 04:51:34.503295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:45.026 [2024-10-15 04:51:34.503406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:45.026 [2024-10-15 04:51:34.503445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.026 [2024-10-15 04:51:34.503508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.026 [2024-10-15 04:51:34.503543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:45.026 [2024-10-15 04:51:34.503574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:45.026 [2024-10-15 04:51:34.503605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.026 [2024-10-15 04:51:34.503646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.026 [2024-10-15 04:51:34.503839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:45.026 [2024-10-15 04:51:34.503910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:45.026 [2024-10-15 04:51:34.503946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:45.026 [2024-10-15 04:51:34.504045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.988 ms, result 0 00:27:45.026 true 00:27:45.026 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:45.285 { 00:27:45.285 "name": "ftl", 00:27:45.285 "properties": [ 00:27:45.285 { 00:27:45.285 "name": "superblock_version", 00:27:45.285 "value": 5, 00:27:45.285 "read-only": true 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "name": "base_device", 00:27:45.285 "bands": [ 00:27:45.285 { 00:27:45.285 "id": 0, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 1, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 2, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 3, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 4, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 5, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 6, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 7, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 8, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 9, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 10, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 11, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 12, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 13, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 14, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 15, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 16, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 17, 00:27:45.285 "state": "FREE", 00:27:45.285 "validity": 0.0 00:27:45.285 } 00:27:45.285 ], 00:27:45.285 "read-only": true 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "name": "cache_device", 00:27:45.285 "type": "bdev", 00:27:45.285 "chunks": [ 00:27:45.285 { 00:27:45.285 "id": 0, 00:27:45.285 "state": "INACTIVE", 00:27:45.285 "utilization": 0.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 1, 00:27:45.285 "state": "CLOSED", 00:27:45.285 "utilization": 1.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 2, 00:27:45.285 "state": "CLOSED", 00:27:45.285 "utilization": 1.0 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 3, 00:27:45.285 "state": "OPEN", 00:27:45.285 "utilization": 0.001953125 00:27:45.285 }, 00:27:45.285 { 00:27:45.285 "id": 4, 00:27:45.285 "state": "OPEN", 00:27:45.286 "utilization": 0.0 00:27:45.286 } 00:27:45.286 ], 00:27:45.286 "read-only": true 00:27:45.286 }, 00:27:45.286 { 00:27:45.286 "name": "verbose_mode", 
00:27:45.286 "value": true, 00:27:45.286 "unit": "", 00:27:45.286 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:45.286 }, 00:27:45.286 { 00:27:45.286 "name": "prep_upgrade_on_shutdown", 00:27:45.286 "value": true, 00:27:45.286 "unit": "", 00:27:45.286 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:45.286 } 00:27:45.286 ] 00:27:45.286 } 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80834 ]] 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80834 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80834 ']' 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80834 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80834 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80834' 00:27:45.286 killing process with pid 80834 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80834 00:27:45.286 04:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80834 00:27:46.664 [2024-10-15 04:51:36.000811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:46.664 [2024-10-15 04:51:36.022399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.664 [2024-10-15 04:51:36.022459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:46.664 [2024-10-15 04:51:36.022477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.664 [2024-10-15 04:51:36.022490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.664 [2024-10-15 04:51:36.022515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:46.664 [2024-10-15 04:51:36.027324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.664 [2024-10-15 04:51:36.027357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:46.664 [2024-10-15 04:51:36.027371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.799 ms 00:27:46.664 [2024-10-15 04:51:36.027382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.077202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.077517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:54.789 [2024-10-15 04:51:43.077550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7061.230 ms 00:27:54.789 [2024-10-15 04:51:43.077563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.078689] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.078725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:54.789 [2024-10-15 04:51:43.078738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:27:54.789 [2024-10-15 04:51:43.078750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.079677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.079697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:54.789 [2024-10-15 04:51:43.079710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:27:54.789 [2024-10-15 04:51:43.079721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.095310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.095353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:54.789 [2024-10-15 04:51:43.095368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.573 ms 00:27:54.789 [2024-10-15 04:51:43.095380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.104465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.104508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:54.789 [2024-10-15 04:51:43.104524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.061 ms 00:27:54.789 [2024-10-15 04:51:43.104536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.104643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.104657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:54.789 [2024-10-15 04:51:43.104671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:27:54.789 [2024-10-15 04:51:43.104683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.119725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.119937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:54.789 [2024-10-15 04:51:43.119963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.046 ms 00:27:54.789 [2024-10-15 04:51:43.119974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.134617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.789 [2024-10-15 04:51:43.134656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:54.789 [2024-10-15 04:51:43.134671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.624 ms 00:27:54.789 [2024-10-15 04:51:43.134681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.789 [2024-10-15 04:51:43.149210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.790 [2024-10-15 04:51:43.149245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:54.790 [2024-10-15 04:51:43.149259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.513 ms 00:27:54.790 [2024-10-15 04:51:43.149270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.163468] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.790 [2024-10-15 04:51:43.163525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:54.790 [2024-10-15 04:51:43.163539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.131 ms 00:27:54.790 [2024-10-15 04:51:43.163550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.163589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:54.790 [2024-10-15 04:51:43.163616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:54.790 [2024-10-15 04:51:43.163631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:54.790 [2024-10-15 04:51:43.163658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:54.790 [2024-10-15 04:51:43.163670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:54.790 [2024-10-15 04:51:43.163859] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:54.790 [2024-10-15 04:51:43.163871] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 76c1b055-8658-4083-b34a-dd57ff62c762 00:27:54.790 [2024-10-15 04:51:43.163884] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:54.790 [2024-10-15 04:51:43.163895] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:54.790 [2024-10-15 04:51:43.163906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:54.790 [2024-10-15 04:51:43.163919] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:54.790 [2024-10-15 04:51:43.163930] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:54.790 [2024-10-15 04:51:43.163942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:54.790 [2024-10-15 04:51:43.163954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:54.790 [2024-10-15 04:51:43.163964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:54.790 [2024-10-15 04:51:43.163975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:54.790 [2024-10-15 04:51:43.163986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.790 [2024-10-15 04:51:43.163998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:54.790 [2024-10-15 04:51:43.164022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:27:54.790 [2024-10-15 04:51:43.164033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.185567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.790 [2024-10-15 04:51:43.185780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:54.790 [2024-10-15 04:51:43.185804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.529 ms 00:27:54.790 [2024-10-15 04:51:43.185834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.186416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.790 [2024-10-15 04:51:43.186437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:54.790 [2024-10-15 04:51:43.186449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:27:54.790 [2024-10-15 04:51:43.186460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.255072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.255144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:54.790 [2024-10-15 04:51:43.255161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.255173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.255241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.255259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:54.790 [2024-10-15 04:51:43.255272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.255283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.255429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.255443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:54.790 [2024-10-15 04:51:43.255456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.255467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.255487] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.255500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:54.790 [2024-10-15 04:51:43.255517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.255528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.389345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.389433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:54.790 [2024-10-15 04:51:43.389453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.389465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.496261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.496518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:54.790 [2024-10-15 04:51:43.496546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.496559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.496726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.496748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:54.790 [2024-10-15 04:51:43.496760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.496772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.496985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.497034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:54.790 [2024-10-15 04:51:43.497067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.497087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.497224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.497239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:54.790 [2024-10-15 04:51:43.497251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.497262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.497306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.497319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:54.790 [2024-10-15 04:51:43.497331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.497342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.497407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.497419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:54.790 [2024-10-15 04:51:43.497431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.497442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 
[2024-10-15 04:51:43.497496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:54.790 [2024-10-15 04:51:43.497509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:54.790 [2024-10-15 04:51:43.497520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:54.790 [2024-10-15 04:51:43.497536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.790 [2024-10-15 04:51:43.497686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7487.389 ms, result 0 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81407 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81407 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81407 ']' 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:57.351 04:51:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 [2024-10-15 04:51:46.942395] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
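[Editor's note] After the 'FTL shutdown' management process finishes (7487.389 ms, result 0, above), tcp_target_setup (ftl/common.sh@81-91) relaunches spdk_tgt from the JSON config persisted before shutdown, which is what brings FTL back up in the upgrade path as pid 81407. A minimal sketch of that restart; the backgrounding and $! bookkeeping are inferred, while the binary, cpumask, config flag, and the waitforlisten helper all appear verbatim in this log:

  tgt_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
  if [[ -f $tgt_json ]]; then
    # Pin the target to core 0 and replay the saved bdev/FTL configuration.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=$tgt_json &
    spdk_tgt_pid=$!
    # Block until the new target is up and listening on /var/tmp/spdk.sock.
    waitforlisten $spdk_tgt_pid
  fi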
00:27:57.609 [2024-10-15 04:51:46.942521] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81407 ] 00:27:57.868 [2024-10-15 04:51:47.115436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.868 [2024-10-15 04:51:47.267598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.246 [2024-10-15 04:51:48.415122] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:59.246 [2024-10-15 04:51:48.415210] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:59.246 [2024-10-15 04:51:48.564208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.246 [2024-10-15 04:51:48.564280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:59.246 [2024-10-15 04:51:48.564299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:59.246 [2024-10-15 04:51:48.564311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.246 [2024-10-15 04:51:48.564383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.246 [2024-10-15 04:51:48.564402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:59.246 [2024-10-15 04:51:48.564414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:59.246 [2024-10-15 04:51:48.564425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.246 [2024-10-15 04:51:48.564452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:59.246 [2024-10-15 04:51:48.565520] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:59.246 [2024-10-15 04:51:48.565561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.246 [2024-10-15 04:51:48.565573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:59.246 [2024-10-15 04:51:48.565585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.117 ms 00:27:59.246 [2024-10-15 04:51:48.565596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.246 [2024-10-15 04:51:48.568112] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:59.246 [2024-10-15 04:51:48.588434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.246 [2024-10-15 04:51:48.588479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:59.246 [2024-10-15 04:51:48.588497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.355 ms 00:27:59.246 [2024-10-15 04:51:48.588509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.246 [2024-10-15 04:51:48.588599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.246 [2024-10-15 04:51:48.588613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:59.246 [2024-10-15 04:51:48.588625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:59.246 [2024-10-15 04:51:48.588636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.246 [2024-10-15 04:51:48.601264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.246 [2024-10-15 
04:51:48.601301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:59.246 [2024-10-15 04:51:48.601321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.550 ms 00:27:59.246 [2024-10-15 04:51:48.601332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.601448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.247 [2024-10-15 04:51:48.601466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:59.247 [2024-10-15 04:51:48.601480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:27:59.247 [2024-10-15 04:51:48.601491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.601574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.247 [2024-10-15 04:51:48.601588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:59.247 [2024-10-15 04:51:48.601600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:59.247 [2024-10-15 04:51:48.601611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.601649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:59.247 [2024-10-15 04:51:48.607343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.247 [2024-10-15 04:51:48.607389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:59.247 [2024-10-15 04:51:48.607407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.713 ms 00:27:59.247 [2024-10-15 04:51:48.607418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.607451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.247 [2024-10-15 04:51:48.607469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:59.247 [2024-10-15 04:51:48.607482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:59.247 [2024-10-15 04:51:48.607492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.607539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:59.247 [2024-10-15 04:51:48.607565] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:59.247 [2024-10-15 04:51:48.607605] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:59.247 [2024-10-15 04:51:48.607629] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:59.247 [2024-10-15 04:51:48.607725] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:59.247 [2024-10-15 04:51:48.607739] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:59.247 [2024-10-15 04:51:48.607754] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:59.247 [2024-10-15 04:51:48.607768] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:59.247 [2024-10-15 04:51:48.607781] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:59.247 [2024-10-15 04:51:48.607793] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:59.247 [2024-10-15 04:51:48.607805] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:59.247 [2024-10-15 04:51:48.607841] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:59.247 [2024-10-15 04:51:48.607854] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:59.247 [2024-10-15 04:51:48.607865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.247 [2024-10-15 04:51:48.607886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:59.247 [2024-10-15 04:51:48.607897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:27:59.247 [2024-10-15 04:51:48.607908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.607999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.247 [2024-10-15 04:51:48.608011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:59.247 [2024-10-15 04:51:48.608022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:59.247 [2024-10-15 04:51:48.608033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.247 [2024-10-15 04:51:48.608136] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:59.247 [2024-10-15 04:51:48.608150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:59.247 [2024-10-15 04:51:48.608162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:59.247 [2024-10-15 04:51:48.608194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:59.247 [2024-10-15 04:51:48.608214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:59.247 [2024-10-15 04:51:48.608225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:59.247 [2024-10-15 04:51:48.608234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:59.247 [2024-10-15 04:51:48.608259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:59.247 [2024-10-15 04:51:48.608268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:59.247 [2024-10-15 04:51:48.608288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:59.247 [2024-10-15 04:51:48.608297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:59.247 [2024-10-15 04:51:48.608317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:59.247 [2024-10-15 04:51:48.608326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608336] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:59.247 [2024-10-15 04:51:48.608345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:59.247 [2024-10-15 04:51:48.608355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:59.247 [2024-10-15 04:51:48.608374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:59.247 [2024-10-15 04:51:48.608383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:59.247 [2024-10-15 04:51:48.608413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:59.247 [2024-10-15 04:51:48.608423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:59.247 [2024-10-15 04:51:48.608442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:59.247 [2024-10-15 04:51:48.608451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:59.247 [2024-10-15 04:51:48.608470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:59.247 [2024-10-15 04:51:48.608480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:59.247 [2024-10-15 04:51:48.608500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:59.247 [2024-10-15 04:51:48.608531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:59.247 [2024-10-15 04:51:48.608560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:59.247 [2024-10-15 04:51:48.608571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608581] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:59.247 [2024-10-15 04:51:48.608591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:59.247 [2024-10-15 04:51:48.608603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.247 [2024-10-15 04:51:48.608624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:59.247 [2024-10-15 04:51:48.608634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:59.247 [2024-10-15 04:51:48.608643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:59.247 [2024-10-15 04:51:48.608653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:59.247 [2024-10-15 04:51:48.608663] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:59.247 [2024-10-15 04:51:48.608673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:59.247 [2024-10-15 04:51:48.608685] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:59.247 [2024-10-15 04:51:48.608698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:59.247 [2024-10-15 04:51:48.608726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:59.247 [2024-10-15 04:51:48.608759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:59.247 [2024-10-15 04:51:48.608769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:59.247 [2024-10-15 04:51:48.608780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:59.247 [2024-10-15 04:51:48.608790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:59.247 [2024-10-15 04:51:48.608869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:59.248 [2024-10-15 04:51:48.608882] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:59.248 [2024-10-15 04:51:48.608894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.248 [2024-10-15 04:51:48.608907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:59.248 [2024-10-15 04:51:48.608918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:59.248 [2024-10-15 04:51:48.608930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:59.248 [2024-10-15 04:51:48.608942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:59.248 [2024-10-15 04:51:48.608954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.248 [2024-10-15 04:51:48.608964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:59.248 [2024-10-15 04:51:48.608975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.873 ms 00:27:59.248 [2024-10-15 04:51:48.608986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.248 [2024-10-15 04:51:48.609041] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:59.248 [2024-10-15 04:51:48.609054] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:02.533 [2024-10-15 04:51:52.000326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.533 [2024-10-15 04:51:52.000423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:02.533 [2024-10-15 04:51:52.000446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3396.787 ms 00:28:02.533 [2024-10-15 04:51:52.000458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.791 [2024-10-15 04:51:52.046869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.791 [2024-10-15 04:51:52.046945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:02.791 [2024-10-15 04:51:52.046966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.100 ms 00:28:02.791 [2024-10-15 04:51:52.046978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.791 [2024-10-15 04:51:52.047154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.791 [2024-10-15 04:51:52.047170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:02.791 [2024-10-15 04:51:52.047182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:02.791 [2024-10-15 04:51:52.047200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.791 [2024-10-15 04:51:52.097577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.791 [2024-10-15 04:51:52.097655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:02.791 [2024-10-15 04:51:52.097673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.402 ms 00:28:02.791 [2024-10-15 04:51:52.097685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.791 [2024-10-15 04:51:52.097782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.791 [2024-10-15 04:51:52.097803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:02.791 [2024-10-15 04:51:52.097834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:02.791 [2024-10-15 04:51:52.097846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.791 [2024-10-15 04:51:52.098724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.791 [2024-10-15 04:51:52.098755] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:02.791 [2024-10-15 04:51:52.098768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.753 ms 00:28:02.791 [2024-10-15 04:51:52.098779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.791 [2024-10-15 04:51:52.098848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.098862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:02.792 [2024-10-15 04:51:52.098878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:02.792 [2024-10-15 04:51:52.098889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.792 [2024-10-15 04:51:52.123553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.123634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:02.792 [2024-10-15 04:51:52.123654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.672 ms 00:28:02.792 [2024-10-15 04:51:52.123665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.792 [2024-10-15 04:51:52.156878] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:02.792 [2024-10-15 04:51:52.157216] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:02.792 [2024-10-15 04:51:52.157246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.157260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:02.792 [2024-10-15 04:51:52.157276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.397 ms 00:28:02.792 [2024-10-15 04:51:52.157288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.792 [2024-10-15 04:51:52.179047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.179122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:02.792 [2024-10-15 04:51:52.179142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.705 ms 00:28:02.792 [2024-10-15 04:51:52.179154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.792 [2024-10-15 04:51:52.199406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.199704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:02.792 [2024-10-15 04:51:52.199734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.192 ms 00:28:02.792 [2024-10-15 04:51:52.199747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.792 [2024-10-15 04:51:52.218714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.218975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:02.792 [2024-10-15 04:51:52.219002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.886 ms 00:28:02.792 [2024-10-15 04:51:52.219015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.792 [2024-10-15 04:51:52.220004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.792 [2024-10-15 04:51:52.220033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:02.792 [2024-10-15 
04:51:52.220049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.774 ms 00:28:02.792 [2024-10-15 04:51:52.220065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.320537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.320657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:03.059 [2024-10-15 04:51:52.320687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.590 ms 00:28:03.059 [2024-10-15 04:51:52.320699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.336607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:03.059 [2024-10-15 04:51:52.338409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.338673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:03.059 [2024-10-15 04:51:52.338704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.627 ms 00:28:03.059 [2024-10-15 04:51:52.338717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.338938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.338958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:03.059 [2024-10-15 04:51:52.338977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:03.059 [2024-10-15 04:51:52.338994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.339083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.339097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:03.059 [2024-10-15 04:51:52.339109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:03.059 [2024-10-15 04:51:52.339120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.339151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.339164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:03.059 [2024-10-15 04:51:52.339175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:03.059 [2024-10-15 04:51:52.339186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.339244] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:03.059 [2024-10-15 04:51:52.339258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.339269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:03.059 [2024-10-15 04:51:52.339281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:03.059 [2024-10-15 04:51:52.339298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.379984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.380071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:03.059 [2024-10-15 04:51:52.380092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.712 ms 00:28:03.059 [2024-10-15 04:51:52.380115] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.380249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.059 [2024-10-15 04:51:52.380264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:03.059 [2024-10-15 04:51:52.380276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:03.059 [2024-10-15 04:51:52.380288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.059 [2024-10-15 04:51:52.382002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3823.393 ms, result 0 00:28:03.059 [2024-10-15 04:51:52.396451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.059 [2024-10-15 04:51:52.412474] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:03.059 [2024-10-15 04:51:52.423023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:03.626 04:51:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.626 04:51:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:03.626 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.626 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:03.626 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:03.885 [2024-10-15 04:51:53.190222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.885 [2024-10-15 04:51:53.190509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:03.885 [2024-10-15 04:51:53.190537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:03.885 [2024-10-15 04:51:53.190549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.885 [2024-10-15 04:51:53.190593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.885 [2024-10-15 04:51:53.190611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:03.885 [2024-10-15 04:51:53.190623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:03.885 [2024-10-15 04:51:53.190633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.885 [2024-10-15 04:51:53.190654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.885 [2024-10-15 04:51:53.190665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:03.885 [2024-10-15 04:51:53.190676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.885 [2024-10-15 04:51:53.190687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.885 [2024-10-15 04:51:53.190755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.526 ms, result 0 00:28:03.885 true 00:28:03.885 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:03.885 { 00:28:03.885 "name": "ftl", 00:28:03.885 "properties": [ 00:28:03.885 { 00:28:03.885 "name": "superblock_version", 00:28:03.885 "value": 5, 00:28:03.885 "read-only": true 00:28:03.885 }, 
00:28:03.885 { 00:28:03.885 "name": "base_device", 00:28:03.885 "bands": [ 00:28:03.885 { 00:28:03.885 "id": 0, 00:28:03.885 "state": "CLOSED", 00:28:03.885 "validity": 1.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 1, 00:28:03.885 "state": "CLOSED", 00:28:03.885 "validity": 1.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 2, 00:28:03.885 "state": "CLOSED", 00:28:03.885 "validity": 0.007843137254901933 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 3, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 4, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 5, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 6, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 7, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 8, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 9, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 10, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 11, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 12, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 13, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 14, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 15, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 16, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 17, 00:28:03.885 "state": "FREE", 00:28:03.885 "validity": 0.0 00:28:03.885 } 00:28:03.885 ], 00:28:03.885 "read-only": true 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "name": "cache_device", 00:28:03.885 "type": "bdev", 00:28:03.885 "chunks": [ 00:28:03.885 { 00:28:03.885 "id": 0, 00:28:03.885 "state": "INACTIVE", 00:28:03.885 "utilization": 0.0 00:28:03.885 }, 00:28:03.885 { 00:28:03.885 "id": 1, 00:28:03.885 "state": "OPEN", 00:28:03.886 "utilization": 0.0 00:28:03.886 }, 00:28:03.886 { 00:28:03.886 "id": 2, 00:28:03.886 "state": "OPEN", 00:28:03.886 "utilization": 0.0 00:28:03.886 }, 00:28:03.886 { 00:28:03.886 "id": 3, 00:28:03.886 "state": "FREE", 00:28:03.886 "utilization": 0.0 00:28:03.886 }, 00:28:03.886 { 00:28:03.886 "id": 4, 00:28:03.886 "state": "FREE", 00:28:03.886 "utilization": 0.0 00:28:03.886 } 00:28:03.886 ], 00:28:03.886 "read-only": true 00:28:03.886 }, 00:28:03.886 { 00:28:03.886 "name": "verbose_mode", 00:28:03.886 "value": true, 00:28:03.886 "unit": "", 00:28:03.886 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:03.886 }, 00:28:03.886 { 00:28:03.886 "name": "prep_upgrade_on_shutdown", 00:28:03.886 "value": false, 00:28:03.886 "unit": "", 00:28:03.886 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:03.886 } 00:28:03.886 ] 00:28:03.886 } 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:04.144 04:51:53 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:04.144 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:04.402 Validate MD5 checksum, iteration 1 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:04.402 04:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.661 [2024-10-15 04:51:53.931110] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
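The spdk_dd invocation traced just above is the heart of each checksum pass. Per the trace, tcp_dd is a thin wrapper from ftl/common.sh that runs spdk_dd as an NVMe/TCP initiator: the ini.json config attaches ftln1 through the listener the target opened on 127.0.0.1 port 4420, and the remaining flags copy 1024 one-MiB blocks from the bdev into a plain file. The two jq filters before it act as preconditions; against the property dump shown earlier both counts come back 0 (no cache chunk has non-zero utilization, no band is OPENED), so the [[ 0 -ne 0 ]] guards fall through and the loop starts at skip=0. A minimal sketch of the wrapper under those assumptions ($testfile is a stand-in for the test/ftl/file output path):

    tcp_dd() {
        # spdk_dd acts as the NVMe/TCP initiator here: ini.json describes how
        # to attach ftln1 from the target; the rest is dd-style plumbing.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }
    # Iteration 1: read the first 1024 MiB of ftln1 at queue depth 2.
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=0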
00:28:04.661 [2024-10-15 04:51:53.931229] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81496 ] 00:28:04.661 [2024-10-15 04:51:54.092835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.919 [2024-10-15 04:51:54.209600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.820  [2024-10-15T04:51:56.582Z] Copying: 678/1024 [MB] (678 MBps) [2024-10-15T04:51:57.957Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:28:08.453 00:28:08.453 04:51:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:08.453 04:51:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.356 Validate MD5 checksum, iteration 2 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2e1862f958c440870292ee5c0bcbb24e 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2e1862f958c440870292ee5c0bcbb24e != \2\e\1\8\6\2\f\9\5\8\c\4\4\0\8\7\0\2\9\2\e\e\5\c\0\b\c\b\b\2\4\e ]] 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:10.356 04:51:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.356 [2024-10-15 04:51:59.713290] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
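The trace just above also shows the validation step that ran between the two copies: md5sum over the freshly read file, cut to isolate the digest, and a bash pattern test against the expected value (the escaped \2\e\1\8... form is simply how xtrace prints the right-hand side of [[ ... != ... ]]). A sketch of one iteration's check, assuming the expected digests were recorded when the test data was first written earlier in the run (expected_sums and $testfile are stand-in names, not script internals confirmed by this log):

    skip=$((skip + 1024))   # advance the read window for the next pass
    sum=$(md5sum "$testfile" | cut -f1 '-d ')
    # Any mismatch against the recorded digest fails the test on the spot
    # (this sits inside the test function, hence the return).
    [[ $sum != "${expected_sums[i]}" ]] && return 1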
00:28:10.356 [2024-10-15 04:51:59.713628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81557 ] 00:28:10.614 [2024-10-15 04:51:59.882054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.614 [2024-10-15 04:51:59.992566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.516  [2024-10-15T04:52:02.585Z] Copying: 567/1024 [MB] (567 MBps) [2024-10-15T04:52:03.958Z] Copying: 1024/1024 [MB] (average 567 MBps) 00:28:14.454 00:28:14.454 04:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:14.454 04:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4b75dad65c75e9398d7f759a102aed60 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4b75dad65c75e9398d7f759a102aed60 != \4\b\7\5\d\a\d\6\5\c\7\5\e\9\3\9\8\d\7\f\7\5\9\a\1\0\2\a\e\d\6\0 ]] 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81407 ]] 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81407 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81624 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81624 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81624 ']' 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
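After iteration 2 validates the same way (digest 4b75dad6...), the sequence above reaches the pivot of the whole test: tcp_target_shutdown_dirty sends SIGKILL to the running target (PID 81407), deliberately skipping FTL's graceful shutdown so no clean-shutdown state is persisted, and tcp_target_setup relaunches spdk_tgt (PID 81624) from the same saved tgt.json. A sketch of the two helpers as the trace suggests they behave (the exact bodies live in ftl/common.sh and may differ):

    tcp_target_shutdown_dirty() {
        # SIGKILL gives FTL no chance to persist a clean shutdown marker.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Relaunch from the saved target config and wait on the RPC socket.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"
    }

Because the shutdown was dirty, the FTL startup that follows takes the recovery path rather than the scrub-and-initialize path of the first bring-up: note the "SHM: clean 0, shm_clean 0" line, the "Recover band state" and "Restore P2L checkpoints" steps, and the two "Recover open chunk" processes that close the chunks left OPEN at kill time, before the target listens on port 4420 again and the checksum loop reruns against the digests recorded above.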
00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.356 04:52:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:16.356 [2024-10-15 04:52:05.568662] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:28:16.356 [2024-10-15 04:52:05.568793] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81624 ] 00:28:16.356 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81407 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:16.356 [2024-10-15 04:52:05.739881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.618 [2024-10-15 04:52:05.884941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.553 [2024-10-15 04:52:07.007072] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:17.553 [2024-10-15 04:52:07.007168] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:17.812 [2024-10-15 04:52:07.157213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.157301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:17.812 [2024-10-15 04:52:07.157324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:17.812 [2024-10-15 04:52:07.157336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.157432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.157451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:17.812 [2024-10-15 04:52:07.157463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:17.812 [2024-10-15 04:52:07.157474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.157503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:17.812 [2024-10-15 04:52:07.158610] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:17.812 [2024-10-15 04:52:07.158649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.158662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:17.812 [2024-10-15 04:52:07.158676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.153 ms 00:28:17.812 [2024-10-15 04:52:07.158687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.159284] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:17.812 [2024-10-15 04:52:07.187786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.187883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:17.812 [2024-10-15 04:52:07.187906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.544 ms 00:28:17.812 [2024-10-15 04:52:07.187918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.202889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:17.812 [2024-10-15 04:52:07.202959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:17.812 [2024-10-15 04:52:07.202981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:17.812 [2024-10-15 04:52:07.202994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.203595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.203620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:17.812 [2024-10-15 04:52:07.203633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:28:17.812 [2024-10-15 04:52:07.203645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.203723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.203739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:17.812 [2024-10-15 04:52:07.203758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:28:17.812 [2024-10-15 04:52:07.203770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.203808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.203858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:17.812 [2024-10-15 04:52:07.203870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:17.812 [2024-10-15 04:52:07.203882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.203917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:17.812 [2024-10-15 04:52:07.209036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.209291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:17.812 [2024-10-15 04:52:07.209322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.136 ms 00:28:17.812 [2024-10-15 04:52:07.209337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.812 [2024-10-15 04:52:07.209407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.812 [2024-10-15 04:52:07.209437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:17.812 [2024-10-15 04:52:07.209457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:17.813 [2024-10-15 04:52:07.209472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.813 [2024-10-15 04:52:07.209550] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:17.813 [2024-10-15 04:52:07.209592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:17.813 [2024-10-15 04:52:07.209660] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:17.813 [2024-10-15 04:52:07.209695] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:17.813 [2024-10-15 04:52:07.209873] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:17.813 [2024-10-15 04:52:07.209904] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:17.813 [2024-10-15 04:52:07.209930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:17.813 [2024-10-15 04:52:07.209955] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:17.813 [2024-10-15 04:52:07.209980] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210002] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:17.813 [2024-10-15 04:52:07.210022] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:17.813 [2024-10-15 04:52:07.210043] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:17.813 [2024-10-15 04:52:07.210058] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:17.813 [2024-10-15 04:52:07.210072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.813 [2024-10-15 04:52:07.210085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:17.813 [2024-10-15 04:52:07.210099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:28:17.813 [2024-10-15 04:52:07.210117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.813 [2024-10-15 04:52:07.210213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.813 [2024-10-15 04:52:07.210226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:17.813 [2024-10-15 04:52:07.210238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:28:17.813 [2024-10-15 04:52:07.210250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.813 [2024-10-15 04:52:07.210363] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:17.813 [2024-10-15 04:52:07.210378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:17.813 [2024-10-15 04:52:07.210392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:17.813 [2024-10-15 04:52:07.210433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:17.813 [2024-10-15 04:52:07.210456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:17.813 [2024-10-15 04:52:07.210467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:17.813 [2024-10-15 04:52:07.210478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:17.813 [2024-10-15 04:52:07.210501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:17.813 [2024-10-15 04:52:07.210512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:17.813 [2024-10-15 04:52:07.210549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:17.813 [2024-10-15 04:52:07.210561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:17.813 [2024-10-15 04:52:07.210599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:17.813 [2024-10-15 04:52:07.210610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:17.813 [2024-10-15 04:52:07.210642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:17.813 [2024-10-15 04:52:07.210652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:17.813 [2024-10-15 04:52:07.210672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:17.813 [2024-10-15 04:52:07.210698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:17.813 [2024-10-15 04:52:07.210719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:17.813 [2024-10-15 04:52:07.210729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:17.813 [2024-10-15 04:52:07.210750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:17.813 [2024-10-15 04:52:07.210763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:17.813 [2024-10-15 04:52:07.210784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:17.813 [2024-10-15 04:52:07.210794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:17.813 [2024-10-15 04:52:07.210814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:17.813 [2024-10-15 04:52:07.210843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:17.813 [2024-10-15 04:52:07.210892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:17.813 [2024-10-15 04:52:07.210903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210914] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:17.813 [2024-10-15 04:52:07.210926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:17.813 [2024-10-15 04:52:07.210936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:17.813 [2024-10-15 04:52:07.210948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:17.813 [2024-10-15 04:52:07.210964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:17.813 [2024-10-15 04:52:07.210976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:17.813 [2024-10-15 04:52:07.210987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:17.813 [2024-10-15 04:52:07.210998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:17.813 [2024-10-15 04:52:07.211007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:17.813 [2024-10-15 04:52:07.211021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:17.813 [2024-10-15 04:52:07.211033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:17.813 [2024-10-15 04:52:07.211048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:17.813 [2024-10-15 04:52:07.211075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:17.813 [2024-10-15 04:52:07.211112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:17.813 [2024-10-15 04:52:07.211124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:17.813 [2024-10-15 04:52:07.211136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:17.813 [2024-10-15 04:52:07.211150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:17.813 [2024-10-15 04:52:07.211237] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:17.813 [2024-10-15 04:52:07.211253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:17.813 [2024-10-15 04:52:07.211279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:17.813 [2024-10-15 04:52:07.211291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:17.813 [2024-10-15 04:52:07.211307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:17.813 [2024-10-15 04:52:07.211320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.813 [2024-10-15 04:52:07.211332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:17.813 [2024-10-15 04:52:07.211349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.022 ms 00:28:17.813 [2024-10-15 04:52:07.211366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.813 [2024-10-15 04:52:07.261199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.813 [2024-10-15 04:52:07.261281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:17.813 [2024-10-15 04:52:07.261308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.838 ms 00:28:17.813 [2024-10-15 04:52:07.261324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.814 [2024-10-15 04:52:07.261422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.814 [2024-10-15 04:52:07.261437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:17.814 [2024-10-15 04:52:07.261450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:17.814 [2024-10-15 04:52:07.261464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.323359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.323455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:18.072 [2024-10-15 04:52:07.323476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 61.852 ms 00:28:18.072 [2024-10-15 04:52:07.323489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.323587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.323600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:18.072 [2024-10-15 04:52:07.323616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:18.072 [2024-10-15 04:52:07.323627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.323853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.323879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:18.072 [2024-10-15 04:52:07.323893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.127 ms 00:28:18.072 [2024-10-15 04:52:07.323905] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.323965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.323979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:18.072 [2024-10-15 04:52:07.323992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:18.072 [2024-10-15 04:52:07.324003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.352330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.352411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:18.072 [2024-10-15 04:52:07.352432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.337 ms 00:28:18.072 [2024-10-15 04:52:07.352445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.352698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.352718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:18.072 [2024-10-15 04:52:07.352731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:18.072 [2024-10-15 04:52:07.352743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.394840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.394942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:18.072 [2024-10-15 04:52:07.394965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.104 ms 00:28:18.072 [2024-10-15 04:52:07.394977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.411939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.412019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:18.072 [2024-10-15 04:52:07.412039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.905 ms 00:28:18.072 [2024-10-15 04:52:07.412050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.514028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.514394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:18.072 [2024-10-15 04:52:07.514432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 101.986 ms 00:28:18.072 [2024-10-15 04:52:07.514464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.514757] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:18.072 [2024-10-15 04:52:07.514973] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:18.072 [2024-10-15 04:52:07.515152] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:18.072 [2024-10-15 04:52:07.515339] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:18.072 [2024-10-15 04:52:07.515354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.515366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:18.072 
[2024-10-15 04:52:07.515379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.786 ms 00:28:18.072 [2024-10-15 04:52:07.515389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.515537] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:18.072 [2024-10-15 04:52:07.515554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.515566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:18.072 [2024-10-15 04:52:07.515578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:18.072 [2024-10-15 04:52:07.515596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.541183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.541511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:18.072 [2024-10-15 04:52:07.541551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.581 ms 00:28:18.072 [2024-10-15 04:52:07.541578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.557305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.557385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:18.072 [2024-10-15 04:52:07.557417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:18.072 [2024-10-15 04:52:07.557430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.072 [2024-10-15 04:52:07.557586] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:18.072 [2024-10-15 04:52:07.557956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.072 [2024-10-15 04:52:07.557972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:18.072 [2024-10-15 04:52:07.557990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:28:18.072 [2024-10-15 04:52:07.558001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.639 [2024-10-15 04:52:08.102981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.639 [2024-10-15 04:52:08.103297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:18.639 [2024-10-15 04:52:08.103331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 544.278 ms 00:28:18.639 [2024-10-15 04:52:08.103344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.639 [2024-10-15 04:52:08.109404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.639 [2024-10-15 04:52:08.109453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:18.639 [2024-10-15 04:52:08.109469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.116 ms 00:28:18.639 [2024-10-15 04:52:08.109482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.639 [2024-10-15 04:52:08.109921] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:18.639 [2024-10-15 04:52:08.109947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.639 [2024-10-15 04:52:08.109967] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:18.639 [2024-10-15 04:52:08.109980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:28:18.639 [2024-10-15 04:52:08.109991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.639 [2024-10-15 04:52:08.110027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.639 [2024-10-15 04:52:08.110040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:18.639 [2024-10-15 04:52:08.110052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:18.639 [2024-10-15 04:52:08.110063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.639 [2024-10-15 04:52:08.110105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 553.424 ms, result 0 00:28:18.639 [2024-10-15 04:52:08.110160] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:18.639 [2024-10-15 04:52:08.110395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.639 [2024-10-15 04:52:08.110406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:18.639 [2024-10-15 04:52:08.110417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:28:18.639 [2024-10-15 04:52:08.110426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.205 [2024-10-15 04:52:08.661451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.205 [2024-10-15 04:52:08.661768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:19.205 [2024-10-15 04:52:08.661802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 550.374 ms 00:28:19.205 [2024-10-15 04:52:08.661853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.205 [2024-10-15 04:52:08.668227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.205 [2024-10-15 04:52:08.668448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:19.205 [2024-10-15 04:52:08.668473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.315 ms 00:28:19.205 [2024-10-15 04:52:08.668485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.205 [2024-10-15 04:52:08.669079] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:19.205 [2024-10-15 04:52:08.669105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.205 [2024-10-15 04:52:08.669117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:19.205 [2024-10-15 04:52:08.669130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:28:19.205 [2024-10-15 04:52:08.669141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.205 [2024-10-15 04:52:08.669182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.205 [2024-10-15 04:52:08.669194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:19.205 [2024-10-15 04:52:08.669206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:19.205 [2024-10-15 04:52:08.669216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.205 
[2024-10-15 04:52:08.669262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 560.007 ms, result 0 00:28:19.205 [2024-10-15 04:52:08.669315] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:19.206 [2024-10-15 04:52:08.669329] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:19.206 [2024-10-15 04:52:08.669344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.669356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:19.206 [2024-10-15 04:52:08.669368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1113.597 ms 00:28:19.206 [2024-10-15 04:52:08.669379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.669424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.669437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:19.206 [2024-10-15 04:52:08.669449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:19.206 [2024-10-15 04:52:08.669460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.685406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:19.206 [2024-10-15 04:52:08.685619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.685634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:19.206 [2024-10-15 04:52:08.685650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.159 ms 00:28:19.206 [2024-10-15 04:52:08.685662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.686375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.686401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:19.206 [2024-10-15 04:52:08.686416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.578 ms 00:28:19.206 [2024-10-15 04:52:08.686426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.688501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.688527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:19.206 [2024-10-15 04:52:08.688540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.042 ms 00:28:19.206 [2024-10-15 04:52:08.688552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.688612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.688625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:19.206 [2024-10-15 04:52:08.688637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:19.206 [2024-10-15 04:52:08.688648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.688778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.688795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:28:19.206 [2024-10-15 04:52:08.688807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:19.206 [2024-10-15 04:52:08.688831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.688861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.688872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:19.206 [2024-10-15 04:52:08.688884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:19.206 [2024-10-15 04:52:08.688895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.688940] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:19.206 [2024-10-15 04:52:08.688953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.688964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:19.206 [2024-10-15 04:52:08.688980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:19.206 [2024-10-15 04:52:08.688991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.689059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.206 [2024-10-15 04:52:08.689072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:19.206 [2024-10-15 04:52:08.689084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:19.206 [2024-10-15 04:52:08.689096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.206 [2024-10-15 04:52:08.690640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1535.355 ms, result 0 00:28:19.206 [2024-10-15 04:52:08.706190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.465 [2024-10-15 04:52:08.722240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:19.465 [2024-10-15 04:52:08.733255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:19.465 Validate MD5 checksum, iteration 1 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:19.465 04:52:08 
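The tcp_dd call that starts here is a thin wrapper from ftl/common.sh (lines 198-199 in the xtrace that continues below): tcp_initiator_setup checks that the initiator config exists, then spdk_dd is launched pinned to core 1 against the target's RPC socket, with the dd-style options passed straight through. A hedged reconstruction of that wrapper, using the paths from this run:

# Sketch of the tcp_dd wrapper whose xtrace continues below.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
INI_JSON=$SPDK_DIR/test/ftl/config/ini.json

tcp_dd() {
    [[ -f $INI_JSON ]] || return 1   # tcp_initiator_setup must have run
    # spdk_dd attaches to the NVMe/TCP target described in ini.json and
    # copies between the exposed bdev (ftln1) and a local file.
    "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$INI_JSON" "$@"
}

# As invoked for iteration 1:
# tcp_dd --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" --bs=1048576 --count=1024 --qd=2 --skip=0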
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:19.465 04:52:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:19.465 [2024-10-15 04:52:08.871505] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 00:28:19.465 [2024-10-15 04:52:08.871777] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81660 ] 00:28:19.723 [2024-10-15 04:52:09.043441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.723 [2024-10-15 04:52:09.156067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.670  [2024-10-15T04:52:11.747Z] Copying: 582/1024 [MB] (582 MBps) [2024-10-15T04:52:15.940Z] Copying: 1024/1024 [MB] (average 578 MBps) 00:28:26.436 00:28:26.436 04:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:26.436 04:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:27.813 Validate MD5 checksum, iteration 2 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2e1862f958c440870292ee5c0bcbb24e 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2e1862f958c440870292ee5c0bcbb24e != \2\e\1\8\6\2\f\9\5\8\c\4\4\0\8\7\0\2\9\2\e\e\5\c\0\b\c\b\b\2\4\e ]] 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:27.813 04:52:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:27.813 [2024-10-15 04:52:17.026259] Starting SPDK v25.01-pre git sha1 
1b0026227 / DPDK 24.03.0 initialization... 00:28:27.813 [2024-10-15 04:52:17.026628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81749 ] 00:28:27.813 [2024-10-15 04:52:17.198942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.813 [2024-10-15 04:52:17.319235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.715  [2024-10-15T04:52:19.788Z] Copying: 569/1024 [MB] (569 MBps) [2024-10-15T04:52:21.166Z] Copying: 1024/1024 [MB] (average 578 MBps) 00:28:31.662 00:28:31.663 04:52:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:31.663 04:52:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4b75dad65c75e9398d7f759a102aed60 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4b75dad65c75e9398d7f759a102aed60 != \4\b\7\5\d\a\d\6\5\c\7\5\e\9\3\9\8\d\7\f\7\5\9\a\1\0\2\a\e\d\6\0 ]] 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81624 ]] 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81624 00:28:33.568 04:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81624 ']' 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81624 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81624 00:28:33.568 killing process with pid 81624 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81624' 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- 
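Both iterations above follow the loop traced from upgrade_shutdown.sh (lines 96-105): read a 1024 MiB window from the ftln1 bdev at an increasing offset, hash the local copy, and require the sum to match the one recorded before the target was shut down. A condensed sketch of that loop; the expected_md5 array is a stand-in for the reference sums the test computes earlier:

# Condensed test_validate_checksum, per the xtrace above. The expected sums
# below are the ones this run happened to produce, shown as a stand-in.
expected_md5=(2e1862f958c440870292ee5c0bcbb24e 4b75dad65c75e9398d7f759a102aed60)
iterations=2
skip=0
for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" \
           --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$(( skip + 1024 ))                       # advance one window
    sum=$(md5sum "$SPDK_DIR/test/ftl/file" | cut -f1 -d' ')
    [[ $sum == "${expected_md5[i]}" ]] || exit 1  # data survived the shutdown
done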
common/autotest_common.sh@969 -- # kill 81624 00:28:33.568 04:52:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81624 00:28:34.947 [2024-10-15 04:52:24.265584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:34.947 [2024-10-15 04:52:24.287406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.287484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:34.947 [2024-10-15 04:52:24.287504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:34.947 [2024-10-15 04:52:24.287515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.287541] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:34.947 [2024-10-15 04:52:24.292131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.292166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:34.947 [2024-10-15 04:52:24.292181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.578 ms 00:28:34.947 [2024-10-15 04:52:24.292193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.292416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.292431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:34.947 [2024-10-15 04:52:24.292443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:28:34.947 [2024-10-15 04:52:24.292454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.293723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.293764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:34.947 [2024-10-15 04:52:24.293778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.252 ms 00:28:34.947 [2024-10-15 04:52:24.293789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.294766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.294801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:34.947 [2024-10-15 04:52:24.294825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.928 ms 00:28:34.947 [2024-10-15 04:52:24.294837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.310657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.310704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:34.947 [2024-10-15 04:52:24.310720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.799 ms 00:28:34.947 [2024-10-15 04:52:24.310731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.318985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.319037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:34.947 [2024-10-15 04:52:24.319052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.226 ms 00:28:34.947 [2024-10-15 04:52:24.319064] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.319175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.319190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:34.947 [2024-10-15 04:52:24.319203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:34.947 [2024-10-15 04:52:24.319214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.334591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.334640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:34.947 [2024-10-15 04:52:24.334656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.381 ms 00:28:34.947 [2024-10-15 04:52:24.334668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.350025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.350080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:34.947 [2024-10-15 04:52:24.350096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.341 ms 00:28:34.947 [2024-10-15 04:52:24.350106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.365001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.365278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:34.947 [2024-10-15 04:52:24.365305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.876 ms 00:28:34.947 [2024-10-15 04:52:24.365316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.380936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.947 [2024-10-15 04:52:24.381010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:34.947 [2024-10-15 04:52:24.381029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.493 ms 00:28:34.947 [2024-10-15 04:52:24.381040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.947 [2024-10-15 04:52:24.381085] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:34.947 [2024-10-15 04:52:24.381107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:34.947 [2024-10-15 04:52:24.381136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:34.947 [2024-10-15 04:52:24.381149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:34.947 [2024-10-15 04:52:24.381161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 
[2024-10-15 04:52:24.381220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:34.947 [2024-10-15 04:52:24.381320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:34.948 [2024-10-15 04:52:24.381335] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:34.948 [2024-10-15 04:52:24.381346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 76c1b055-8658-4083-b34a-dd57ff62c762 00:28:34.948 [2024-10-15 04:52:24.381358] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:34.948 [2024-10-15 04:52:24.381369] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:34.948 [2024-10-15 04:52:24.381380] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:34.948 [2024-10-15 04:52:24.381401] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:34.948 [2024-10-15 04:52:24.381412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:34.948 [2024-10-15 04:52:24.381424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:34.948 [2024-10-15 04:52:24.381434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:34.948 [2024-10-15 04:52:24.381443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:34.948 [2024-10-15 04:52:24.381454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:34.948 [2024-10-15 04:52:24.381466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.948 [2024-10-15 04:52:24.381478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:34.948 [2024-10-15 04:52:24.381491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.384 ms 00:28:34.948 [2024-10-15 04:52:24.381506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.948 [2024-10-15 04:52:24.403109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.948 [2024-10-15 04:52:24.403175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:34.948 [2024-10-15 04:52:24.403193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.594 ms 00:28:34.948 [2024-10-15 04:52:24.403205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
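When a startup or shutdown looks slow, the per-step durations in these records are the first place to dig. Because every step is a 'name:' record followed by a 'duration:' record, a short awk pass can rank the steps; this is a reader's helper, not part of the test suite, and it assumes the raw log with one record per line:

# Rank trace_step durations, slowest first (assumes one record per line).
awk -F'name: |duration: | ms' '
    /trace_step.*name:/     { name = $2 }
    /trace_step.*duration:/ { printf "%10.3f ms  %s\n", $2, name }
' ftl.log | sort -rn | head
# e.g. here: 21.594 ms Deinitialize L2P, 15.799 ms Persist NV cache metadata, ...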
00:28:34.948 [2024-10-15 04:52:24.403874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.948 [2024-10-15 04:52:24.403888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:34.948 [2024-10-15 04:52:24.403930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.621 ms 00:28:34.948 [2024-10-15 04:52:24.403941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.207 [2024-10-15 04:52:24.473159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.207 [2024-10-15 04:52:24.473245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:35.207 [2024-10-15 04:52:24.473264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.207 [2024-10-15 04:52:24.473275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.207 [2024-10-15 04:52:24.473348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.207 [2024-10-15 04:52:24.473360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:35.207 [2024-10-15 04:52:24.473379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.207 [2024-10-15 04:52:24.473390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.207 [2024-10-15 04:52:24.473554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.207 [2024-10-15 04:52:24.473569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:35.208 [2024-10-15 04:52:24.473581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.208 [2024-10-15 04:52:24.473593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.208 [2024-10-15 04:52:24.473613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.208 [2024-10-15 04:52:24.473625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:35.208 [2024-10-15 04:52:24.473636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.208 [2024-10-15 04:52:24.473652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.208 [2024-10-15 04:52:24.611451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.208 [2024-10-15 04:52:24.611759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:35.208 [2024-10-15 04:52:24.611790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.208 [2024-10-15 04:52:24.611802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.722082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.722392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:35.466 [2024-10-15 04:52:24.722434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.722446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.722609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.722624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:35.466 [2024-10-15 04:52:24.722636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.722647] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.722704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.722717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:35.466 [2024-10-15 04:52:24.722728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.722738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.722921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.722937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:35.466 [2024-10-15 04:52:24.722955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.722966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.723009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.723023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:35.466 [2024-10-15 04:52:24.723034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.723045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.723098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.723111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:35.466 [2024-10-15 04:52:24.723122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.723133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.723186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:35.466 [2024-10-15 04:52:24.723199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:35.466 [2024-10-15 04:52:24.723210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:35.466 [2024-10-15 04:52:24.723220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.466 [2024-10-15 04:52:24.723368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 436.629 ms, result 0 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:36.845 Remove shared memory files 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:36.845 04:52:26 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81407 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:36.845 ************************************ 00:28:36.845 END TEST ftl_upgrade_shutdown 00:28:36.845 ************************************ 00:28:36.845 00:28:36.845 real 1m28.528s 00:28:36.845 user 2m0.967s 00:28:36.845 sys 0m22.962s 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:36.845 04:52:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.845 Process with pid 74282 is not found 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@14 -- # killprocess 74282 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@950 -- # '[' -z 74282 ']' 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@954 -- # kill -0 74282 00:28:36.845 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74282) - No such process 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 74282 is not found' 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81878 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.845 04:52:26 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81878 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@831 -- # '[' -z 81878 ']' 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.845 04:52:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:36.845 [2024-10-15 04:52:26.323202] Starting SPDK v25.01-pre git sha1 1b0026227 / DPDK 24.03.0 initialization... 
00:28:36.845 [2024-10-15 04:52:26.323547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81878 ] 00:28:37.104 [2024-10-15 04:52:26.498185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.362 [2024-10-15 04:52:26.643782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.302 04:52:27 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.302 04:52:27 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:38.302 04:52:27 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:38.563 nvme0n1 00:28:38.563 04:52:27 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:38.563 04:52:27 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.563 04:52:27 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:38.822 04:52:28 ftl -- ftl/common.sh@28 -- # stores=7c7dc58d-7c96-4f6c-ad15-75c7d104804f 00:28:38.822 04:52:28 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:38.822 04:52:28 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c7dc58d-7c96-4f6c-ad15-75c7d104804f 00:28:39.081 04:52:28 ftl -- ftl/ftl.sh@23 -- # killprocess 81878 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@950 -- # '[' -z 81878 ']' 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@954 -- # kill -0 81878 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@955 -- # uname 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81878 00:28:39.081 killing process with pid 81878 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81878' 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@969 -- # kill 81878 00:28:39.081 04:52:28 ftl -- common/autotest_common.sh@974 -- # wait 81878 00:28:41.615 04:52:31 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:42.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:42.182 Waiting for block devices as requested 00:28:42.182 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:42.442 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:42.442 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:42.442 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:47.764 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:47.764 04:52:37 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:47.764 Remove shared memory files 00:28:47.764 04:52:37 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:47.764 04:52:37 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:47.764 04:52:37 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:47.764 04:52:37 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:47.764 04:52:37 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:47.764 04:52:37 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:47.764 
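Everything from 'Process with pid 74282 is not found' to the setup.sh reset above is ftl.sh's at_ftl_exit handler: it starts a throwaway spdk_tgt purely to undo test state, re-attaches the NVMe controller, deletes any leftover lvol stores, kills the target, and rebinds the devices. The rpc/jq core of that sequence, condensed from the trace (killprocess is reduced to the liveness-check/kill/wait steps visible above):

# Condensed from the at_ftl_exit trace above.
RPC="$SPDK_DIR/scripts/rpc.py"

killprocess() {                 # simplified from autotest_common.sh
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap if it is our child
}

clear_lvols() {
    local stores lvs
    stores=$($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $RPC bdev_lvol_delete_lvstore -u "$lvs"
    done
}

$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
clear_lvols
killprocess "$spdk_tgt_pid"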
************************************ 00:28:47.764 END TEST ftl 00:28:47.764 ************************************ 00:28:47.764 00:28:47.764 real 11m8.168s 00:28:47.764 user 13m40.068s 00:28:47.764 sys 1m32.506s 00:28:47.764 04:52:37 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.764 04:52:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:47.764 04:52:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:47.764 04:52:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:47.764 04:52:37 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:47.764 04:52:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:47.764 04:52:37 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:47.764 04:52:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:47.764 04:52:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:47.764 04:52:37 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:47.764 04:52:37 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:47.764 04:52:37 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:47.764 04:52:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.764 04:52:37 -- common/autotest_common.sh@10 -- # set +x 00:28:47.764 04:52:37 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:47.764 04:52:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:47.764 04:52:37 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:47.764 04:52:37 -- common/autotest_common.sh@10 -- # set +x 00:28:50.310 INFO: APP EXITING 00:28:50.310 INFO: killing all VMs 00:28:50.310 INFO: killing vhost app 00:28:50.310 INFO: EXIT DONE 00:28:50.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.878 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:50.878 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:50.878 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:51.136 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:51.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:51.996 Cleaning 00:28:51.996 Removing: /var/run/dpdk/spdk0/config 00:28:51.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:51.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:51.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:51.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:51.997 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:51.997 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:51.997 Removing: /var/run/dpdk/spdk0 00:28:51.997 Removing: /var/run/dpdk/spdk_pid57835 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58070 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58311 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58415 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58471 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58599 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58623 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58833 00:28:51.997 Removing: /var/run/dpdk/spdk_pid58944 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59051 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59179 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59287 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59326 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59363 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59439 00:28:51.997 Removing: /var/run/dpdk/spdk_pid59556 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60014 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60097 
00:28:51.997 Removing: /var/run/dpdk/spdk_pid60171 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60187 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60349 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60365 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60519 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60535 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60606 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60628 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60692 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60716 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60916 00:28:51.997 Removing: /var/run/dpdk/spdk_pid60957 00:28:51.997 Removing: /var/run/dpdk/spdk_pid61042 00:28:51.997 Removing: /var/run/dpdk/spdk_pid61230 00:28:51.997 Removing: /var/run/dpdk/spdk_pid61331 00:28:51.997 Removing: /var/run/dpdk/spdk_pid61373 00:28:51.997 Removing: /var/run/dpdk/spdk_pid61841 00:28:51.997 Removing: /var/run/dpdk/spdk_pid61949 00:28:51.997 Removing: /var/run/dpdk/spdk_pid62059 00:28:51.997 Removing: /var/run/dpdk/spdk_pid62118 00:28:52.255 Removing: /var/run/dpdk/spdk_pid62144 00:28:52.255 Removing: /var/run/dpdk/spdk_pid62228 00:28:52.255 Removing: /var/run/dpdk/spdk_pid62869 00:28:52.255 Removing: /var/run/dpdk/spdk_pid62917 00:28:52.255 Removing: /var/run/dpdk/spdk_pid63407 00:28:52.255 Removing: /var/run/dpdk/spdk_pid63511 00:28:52.255 Removing: /var/run/dpdk/spdk_pid63631 00:28:52.255 Removing: /var/run/dpdk/spdk_pid63684 00:28:52.255 Removing: /var/run/dpdk/spdk_pid63710 00:28:52.255 Removing: /var/run/dpdk/spdk_pid63735 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65631 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65778 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65783 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65801 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65842 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65846 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65858 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65908 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65912 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65924 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65969 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65978 00:28:52.255 Removing: /var/run/dpdk/spdk_pid65990 00:28:52.255 Removing: /var/run/dpdk/spdk_pid67391 00:28:52.255 Removing: /var/run/dpdk/spdk_pid67499 00:28:52.255 Removing: /var/run/dpdk/spdk_pid68944 00:28:52.255 Removing: /var/run/dpdk/spdk_pid70316 00:28:52.255 Removing: /var/run/dpdk/spdk_pid70431 00:28:52.255 Removing: /var/run/dpdk/spdk_pid70548 00:28:52.255 Removing: /var/run/dpdk/spdk_pid70674 00:28:52.255 Removing: /var/run/dpdk/spdk_pid70818 00:28:52.255 Removing: /var/run/dpdk/spdk_pid70892 00:28:52.255 Removing: /var/run/dpdk/spdk_pid71051 00:28:52.255 Removing: /var/run/dpdk/spdk_pid71427 00:28:52.255 Removing: /var/run/dpdk/spdk_pid71469 00:28:52.255 Removing: /var/run/dpdk/spdk_pid71932 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72117 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72219 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72343 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72404 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72425 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72726 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72796 00:28:52.255 Removing: /var/run/dpdk/spdk_pid72885 00:28:52.255 Removing: /var/run/dpdk/spdk_pid73316 00:28:52.255 Removing: /var/run/dpdk/spdk_pid73463 00:28:52.255 Removing: /var/run/dpdk/spdk_pid74282 00:28:52.255 Removing: /var/run/dpdk/spdk_pid74432 00:28:52.255 Removing: /var/run/dpdk/spdk_pid74636 00:28:52.255 Removing: 
/var/run/dpdk/spdk_pid74745 00:28:52.255 Removing: /var/run/dpdk/spdk_pid75070 00:28:52.255 Removing: /var/run/dpdk/spdk_pid75329 00:28:52.255 Removing: /var/run/dpdk/spdk_pid75692 00:28:52.255 Removing: /var/run/dpdk/spdk_pid75895 00:28:52.255 Removing: /var/run/dpdk/spdk_pid76017 00:28:52.513 Removing: /var/run/dpdk/spdk_pid76081 00:28:52.513 Removing: /var/run/dpdk/spdk_pid76213 00:28:52.513 Removing: /var/run/dpdk/spdk_pid76247 00:28:52.513 Removing: /var/run/dpdk/spdk_pid76316 00:28:52.513 Removing: /var/run/dpdk/spdk_pid76522 00:28:52.513 Removing: /var/run/dpdk/spdk_pid76782 00:28:52.513 Removing: /var/run/dpdk/spdk_pid77172 00:28:52.513 Removing: /var/run/dpdk/spdk_pid77585 00:28:52.513 Removing: /var/run/dpdk/spdk_pid78026 00:28:52.513 Removing: /var/run/dpdk/spdk_pid78519 00:28:52.513 Removing: /var/run/dpdk/spdk_pid78661 00:28:52.513 Removing: /var/run/dpdk/spdk_pid78756 00:28:52.513 Removing: /var/run/dpdk/spdk_pid79396 00:28:52.513 Removing: /var/run/dpdk/spdk_pid79465 00:28:52.513 Removing: /var/run/dpdk/spdk_pid79962 00:28:52.513 Removing: /var/run/dpdk/spdk_pid80348 00:28:52.513 Removing: /var/run/dpdk/spdk_pid80834 00:28:52.513 Removing: /var/run/dpdk/spdk_pid80966 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81023 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81087 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81145 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81209 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81407 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81496 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81557 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81624 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81660 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81749 00:28:52.513 Removing: /var/run/dpdk/spdk_pid81878 00:28:52.513 Clean 00:28:52.513 04:52:41 -- common/autotest_common.sh@1451 -- # return 0 00:28:52.513 04:52:41 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:52.513 04:52:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:52.513 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:28:52.772 04:52:42 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:52.772 04:52:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:52.772 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:28:52.772 04:52:42 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:52.772 04:52:42 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:52.772 04:52:42 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:52.772 04:52:42 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:52.772 04:52:42 -- spdk/autotest.sh@394 -- # hostname 00:28:52.772 04:52:42 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:53.031 geninfo: WARNING: invalid characters removed from testname! 
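With the tests done, the epilogue turns to coverage: the capture above produced cov_test.info, and the calls that follow merge it with the pre-test baseline and strip code that is not SPDK's own. The whole pass in outline, condensed from the trace (the separate -r invocations are folded together and the long --rc flag list is abbreviated to $LCOV):

# Coverage epilogue in outline; flags condensed from the trace.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
$LCOV -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
$LCOV -a cov_base.info -a cov_test.info -o cov_total.info     # merge baseline + test
$LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info          # vendored DPDK
$LCOV -r cov_total.info '/usr/*' -o cov_total.info --ignore-errors unused,unused
$LCOV -r cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' \
      -o cov_total.info                                       # tools not under test
rm -f cov_base.info cov_test.info                             # keep only the total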
00:29:19.584 04:53:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:23.772 04:53:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:25.678 04:53:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:27.584 04:53:17 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:30.170 04:53:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:32.076 04:53:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:34.610 04:53:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:34.610 04:53:23 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:29:34.610 04:53:23 -- common/autotest_common.sh@1691 -- $ lcov --version 00:29:34.610 04:53:23 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:29:34.610 04:53:23 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:29:34.610 04:53:23 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:29:34.610 04:53:23 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:29:34.610 04:53:23 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:29:34.610 04:53:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:29:34.610 04:53:23 -- scripts/common.sh@336 -- $ read -ra ver1 00:29:34.610 04:53:23 -- scripts/common.sh@337 -- $ IFS=.-: 00:29:34.610 04:53:23 -- scripts/common.sh@337 -- $ read -ra ver2 00:29:34.610 04:53:23 -- scripts/common.sh@338 -- $ local 'op=<' 00:29:34.610 04:53:23 -- scripts/common.sh@340 -- $ ver1_l=2 00:29:34.611 04:53:23 -- scripts/common.sh@341 -- $ ver2_l=1 00:29:34.611 04:53:23 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:29:34.611 04:53:23 -- scripts/common.sh@344 -- $ case "$op" in 00:29:34.611 04:53:23 -- scripts/common.sh@345 -- $ : 1 00:29:34.611 04:53:23 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:29:34.611 04:53:23 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:34.611 04:53:23 -- scripts/common.sh@365 -- $ decimal 1 00:29:34.611 04:53:23 -- scripts/common.sh@353 -- $ local d=1 00:29:34.611 04:53:23 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:29:34.611 04:53:23 -- scripts/common.sh@355 -- $ echo 1 00:29:34.611 04:53:23 -- scripts/common.sh@365 -- $ ver1[v]=1 00:29:34.611 04:53:23 -- scripts/common.sh@366 -- $ decimal 2 00:29:34.611 04:53:23 -- scripts/common.sh@353 -- $ local d=2 00:29:34.611 04:53:23 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:29:34.611 04:53:23 -- scripts/common.sh@355 -- $ echo 2 00:29:34.611 04:53:23 -- scripts/common.sh@366 -- $ ver2[v]=2 00:29:34.611 04:53:23 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:29:34.611 04:53:23 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:29:34.611 04:53:23 -- scripts/common.sh@368 -- $ return 0 00:29:34.611 04:53:23 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.611 04:53:23 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:29:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.611 --rc genhtml_branch_coverage=1 00:29:34.611 --rc genhtml_function_coverage=1 00:29:34.611 --rc genhtml_legend=1 00:29:34.611 --rc geninfo_all_blocks=1 00:29:34.611 --rc geninfo_unexecuted_blocks=1 00:29:34.611 00:29:34.611 ' 00:29:34.611 04:53:23 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:29:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.611 --rc genhtml_branch_coverage=1 00:29:34.611 --rc genhtml_function_coverage=1 00:29:34.611 --rc genhtml_legend=1 00:29:34.611 --rc geninfo_all_blocks=1 00:29:34.611 --rc geninfo_unexecuted_blocks=1 00:29:34.611 00:29:34.611 ' 00:29:34.611 04:53:23 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:29:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.611 --rc genhtml_branch_coverage=1 00:29:34.611 --rc genhtml_function_coverage=1 00:29:34.611 --rc genhtml_legend=1 00:29:34.611 --rc geninfo_all_blocks=1 00:29:34.611 --rc geninfo_unexecuted_blocks=1 00:29:34.611 00:29:34.611 ' 00:29:34.611 04:53:23 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:29:34.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.611 --rc genhtml_branch_coverage=1 00:29:34.611 --rc genhtml_function_coverage=1 00:29:34.611 --rc genhtml_legend=1 00:29:34.611 --rc geninfo_all_blocks=1 00:29:34.611 --rc geninfo_unexecuted_blocks=1 00:29:34.611 00:29:34.611 ' 00:29:34.611 04:53:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:34.611 04:53:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:29:34.611 04:53:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:34.611 04:53:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.611 04:53:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.611 04:53:23 -- paths/export.sh@2 -- $ 
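The xtrace block above is scripts/common.sh deciding whether the installed lcov predates 1.15 (older releases need different option spellings, hence the LCOV_OPTS branches that follow). Reassembled from the trace into a readable function; the digit validation done by decimal() in the real script is folded into the ':-0' defaults here:

# cmp_versions, reassembled from the xtrace above.
cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
    local ver1 ver2 ver1_l ver2_l op=$2 lt=0 gt=0 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
    done
    case $op in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
        *)   (( lt == 0 && gt == 0 )) ;;
    esac
}
cmp_versions 1.15 '<' 2 && echo "installed lcov is newer than 1.15"   # true in this run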
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.611 04:53:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.611 04:53:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.611 04:53:23 -- paths/export.sh@5 -- $ export PATH 00:29:34.611 04:53:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.611 04:53:23 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:34.611 04:53:23 -- common/autobuild_common.sh@486 -- $ date +%s 00:29:34.611 04:53:23 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728968003.XXXXXX 00:29:34.611 04:53:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728968003.gH2l4m 00:29:34.611 04:53:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:29:34.611 04:53:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:29:34.611 04:53:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:34.611 04:53:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:34.611 04:53:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:34.611 04:53:23 -- common/autobuild_common.sh@502 -- $ get_config_params 00:29:34.611 04:53:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:29:34.611 04:53:23 -- common/autotest_common.sh@10 -- $ set +x 00:29:34.611 04:53:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:34.611 04:53:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:29:34.611 04:53:23 -- pm/common@17 -- $ local monitor 00:29:34.611 04:53:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:34.611 04:53:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:29:35.547 04:53:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:29:35.547 04:53:25 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:29:35.548 04:53:25 -- spdk/autopackage.sh@14 -- $ timing_finish
00:29:35.548 04:53:25 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:35.548 04:53:25 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:35.548 04:53:25 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:35.806 04:53:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:35.806 04:53:25 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:35.806 04:53:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:35.806 04:53:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:35.806 04:53:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:29:35.806 04:53:25 -- pm/common@44 -- $ pid=83602
00:29:35.806 04:53:25 -- pm/common@50 -- $ kill -TERM 83602
00:29:35.806 04:53:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:35.806 04:53:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:29:35.806 04:53:25 -- pm/common@44 -- $ pid=83604
00:29:35.806 04:53:25 -- pm/common@50 -- $ kill -TERM 83604
+ [[ -n 5241 ]]
+ sudo kill 5241
00:29:35.815 [Pipeline] }
00:29:35.828 [Pipeline] // timeout
00:29:35.833 [Pipeline] }
00:29:35.846 [Pipeline] // stage
00:29:35.851 [Pipeline] }
00:29:35.864 [Pipeline] // catchError
00:29:35.872 [Pipeline] stage
00:29:35.874 [Pipeline] { (Stop VM)
00:29:35.885 [Pipeline] sh
00:29:36.167 + vagrant halt
00:29:39.456 ==> default: Halting domain...
00:29:46.063 [Pipeline] sh
00:29:46.342 + vagrant destroy -f
00:29:49.627 ==> default: Removing domain...
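For reference: the stop_monitor_resources trace above shows the reverse side of the monitor pattern. For each monitor, a pid file under output/power is checked and the recorded pid is sent SIGTERM. A sketch of that teardown follows; the pid-file naming and the EXIT trap match the trace, while the stale-pid guard and cleanup are assumptions.

#!/usr/bin/env bash
# Inferred sketch of stop_monitor_resources; not copied from pm/common.
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
PM_OUTPUTDIR=/home/vagrant/spdk_repo/output/power

stop_monitor_resources() {
    local monitor pid signal=TERM
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # Skip monitors that never started or already cleaned up.
        [[ -e $PM_OUTPUTDIR/$monitor.pid ]] || continue
        pid=$(<"$PM_OUTPUTDIR/$monitor.pid")
        # Signal the collector; ignore pids that have already exited.
        kill "-$signal" "$pid" 2>/dev/null || true
        rm -f "$PM_OUTPUTDIR/$monitor.pid"
    done
}

# As traced above, autopackage registers the teardown on shell exit:
trap stop_monitor_resources EXIT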
00:29:49.897 [Pipeline] sh
00:29:50.176 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:29:50.184 [Pipeline] }
00:29:50.198 [Pipeline] // stage
00:29:50.203 [Pipeline] }
00:29:50.217 [Pipeline] // dir
00:29:50.222 [Pipeline] }
00:29:50.235 [Pipeline] // wrap
00:29:50.241 [Pipeline] }
00:29:50.253 [Pipeline] // catchError
00:29:50.261 [Pipeline] stage
00:29:50.263 [Pipeline] { (Epilogue)
00:29:50.275 [Pipeline] sh
00:29:50.555 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:57.134 [Pipeline] catchError
00:29:57.136 [Pipeline] {
00:29:57.148 [Pipeline] sh
00:29:57.430 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:57.430 Artifacts sizes are good
00:29:57.439 [Pipeline] }
00:29:57.453 [Pipeline] // catchError
00:29:57.464 [Pipeline] archiveArtifacts
00:29:57.470 Archiving artifacts
00:29:57.595 [Pipeline] cleanWs
00:29:57.607 [WS-CLEANUP] Deleting project workspace...
00:29:57.607 [WS-CLEANUP] Deferred wipeout is used...
00:29:57.613 [WS-CLEANUP] done
00:29:57.615 [Pipeline] }
00:29:57.631 [Pipeline] // stage
00:29:57.637 [Pipeline] }
00:29:57.651 [Pipeline] // node
00:29:57.656 [Pipeline] End of Pipeline
00:29:57.699 Finished: SUCCESS
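For reference: the Epilogue stage compresses the artifacts and gates the build on their size before archiving ("Artifacts sizes are good"). A hypothetical sketch of such a gate follows; the 2 GiB limit, the directory argument, and the message format are illustrative assumptions, not taken from check_artifacts_size.sh.

#!/usr/bin/env bash
# Hypothetical size gate in the spirit of check_artifacts_size.sh.
set -euo pipefail

artifacts_dir=${1:-output}
limit_kb=$((2 * 1024 * 1024)) # 2 GiB, expressed in KiB for du -sk

# Total on-disk size of the artifact tree, in KiB.
used_kb=$(du -sk "$artifacts_dir" | awk '{print $1}')
if ((used_kb > limit_kb)); then
    echo "Artifacts too large: ${used_kb} KiB (limit ${limit_kb} KiB)" >&2
    exit 1
fi
echo "Artifacts sizes are good"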